Generating Images with Perceptual Similarity Metrics based on Deep Networks
Alexey Dosovitskiy and Thomas Brox
University of Freiburg
{dosovits, brox}@cs.uni-freiburg.de
Abstract
We propose a class of loss functions, which we call deep perceptual similarity metrics (DeePSiM), that allow generating sharp high-resolution images from compressed abstract representations. Instead of computing distances in the image space,
we compute distances between image features extracted by deep neural networks.
This metric reflects perceptual similarity of images much better and, thus, leads to
better results. We demonstrate two examples of use cases of the proposed loss: (1)
networks that invert the AlexNet convolutional network; (2) a modified version of
a variational autoencoder that generates realistic high-resolution random images.
1 Introduction
Recently there has been a surge of interest in training neural networks to generate images. These are
being used for a wide variety of applications: generative models, analysis of learned representations,
learning of 3D representations, future prediction in videos. Nevertheless, there is little work on
studying loss functions which are appropriate for the image generation task. The widely used
squared Euclidean (SE) distance between images often yields blurry results; see Fig. 1 (b). This is
especially the case when there is inherent uncertainty in the prediction. For example, suppose we
aim to reconstruct an image from its feature representation. The precise location of all details is not
preserved in the features. A loss in image space leads to averaging all likely locations of details,
hence the reconstruction looks blurry.
However, exact locations of all fine details are not important for perceptual similarity of images.
What is important is the distribution of these details. Our main insight is that invariance to irrelevant
transformations and sensitivity to local image statistics can be achieved by measuring distances in a
suitable feature space. In fact, convolutional networks provide a feature representation with desirable
properties. They are invariant to small, smooth deformations but sensitive to perceptually important
image properties, like salient edges and textures.
Using a distance in feature space alone does not yet yield a good loss function; see Fig. 1 (d).
Since feature representations are typically contractive, feature similarity does not automatically
mean image similarity. In practice this leads to high-frequency artifacts (Fig. 1 (d)). To force the network to generate realistic images, we introduce a natural image prior based on adversarial training, as proposed by Goodfellow et al. [1]¹. We train a discriminator network to distinguish the output of
the generator from real images based on local image statistics. The objective of the generator is to
trick the discriminator, that is, to generate images that the discriminator cannot distinguish from real
ones. A combination of similarity in an appropriate feature space with adversarial training yields
the best results; see Fig. 1 (e). Results produced with adversarial loss alone (Fig. 1 (c)) are clearly
inferior to those of our approach, so the feature space loss is crucial.
¹ An interesting alternative would be to explicitly analyze feature statistics, similar to Gatys et al. [2]. However, our preliminary experiments with this approach were not successful.
Figure 1: Reconstructions from AlexNet FC6 with different components of the loss. Columns: (a) original image, (b) image loss, (c) image + adversarial loss, (d) image + feature loss, (e) our full loss.
Figure 2: Schematic of our model. Black solid lines denote the forward pass. Dashed lines with arrows on both ends are the losses. Thin dashed lines denote the flow of gradients.
The new loss function is well suited for generating images from highly compressed representations.
We demonstrate this in two applications: inversion of the AlexNet convolutional network and a
generative model based on a variational autoencoder. Reconstructions obtained with our method from
high-level activations of AlexNet are significantly better than with existing approaches. They reveal
that even the predicted class probabilities contain rich texture, color, and position information. As an
example of a true generative model, we show that a variational autoencoder trained with the new loss
produces sharp and realistic high-resolution 227 × 227 pixel images.
2 Related work
There is a long history of neural network based models for image generation. A prominent class of
probabilistic models of images are restricted Boltzmann machines [3] and their deep variants [4, 5].
Autoencoders [6] have been widely used for unsupervised learning and generative modeling, too.
Recently, stochastic neural networks [7] have become popular, and deterministic networks are being
used for image generation tasks [8]. In all these models, loss is measured in the image space. By
combining convolutions and un-pooling (upsampling) layers [5, 1, 8] these models can be applied to
large images.
There is a large body of work on assessing the perceptual similarity of images. Some prominent
examples are the visible differences predictor [9], the spatio-temporal model for moving picture
quality assessment [10], and the perceptual distortion metric of Winkler [11]. The most popular
perceptual image similarity metric is the structural similarity metric (SSIM) [12], which compares
the local statistics of image patches. We are not aware of any work making use of similarity metrics
for machine learning, except a recent pre-print of Ridgeway et al. [13]. They train autoencoders
by directly maximizing the SSIM similarity of images. This resembles in spirit what we do, but
technically is very different. Because of its shallow and local nature, SSIM does not have invariance
properties needed for the tasks we are solving in this paper.
Generative adversarial networks (GANs) have been proposed by Goodfellow et al. [1]. In theory,
this training procedure can lead to a generator that perfectly models the data distribution. Practically,
training GANs is difficult and often leads to oscillatory behavior, divergence, or modeling only part
of the data distribution. Recently, several modifications have been proposed that make GAN training
more stable. Denton et al. [14] employ a multi-scale approach, gradually generating higher resolution
images. Radford et al. [15] make use of an upconvolutional architecture and batch normalization.
GANs can be trained conditionally by feeding the conditioning variable to both the discriminator and
the generator [16]. Usually this conditioning variable is a one-hot encoding of the object class in the
input image. Such GANs learn to generate images of objects from a given class. Recently Mathieu
et al. [17] used GANs for predicting future frames in videos by conditioning on previous frames. Our
approach looks similar to a conditional GAN. However, in a GAN there is no loss directly comparing the generated image to some ground truth. As Fig. 1 shows, the feature loss introduced in the present paper is essential for training on the complicated tasks we are interested in.
Several concurrent works [18-20] share the general idea of measuring the similarity not in the image space but rather in a feature space. These differ from our work both in the details of the method and in the applications. Larsen et al. [18] only run relatively small-scale experiments on images of faces, and they measure the similarity between features extracted from the discriminator, while we study different "comparators" (in fact, we also experimented with features from the discriminator and were not able to get satisfactory results on our applications with those). Lamb et al. [19] and Johnson et al.
[20] use features from different layers, including the lower ones, to measure image similarity, and
therefore do not need the adversarial loss. While this approach may be suitable for tasks which allow
for nearly perfect solutions (e.g. super-resolution with low magnification), it is not applicable to
more complicated problems such as extreme super-resolution or inversion of highly invariant feature
representations.
3 Model
Suppose we are given a supervised image generation task and a training set of input-target pairs $\{y_i, x_i\}$, consisting of high-level image representations $y_i \in \mathbb{R}^I$ and images $x_i \in \mathbb{R}^{W \times H \times C}$. The aim is to learn the parameters $\theta$ of a differentiable generator function $G_\theta(\cdot)\colon \mathbb{R}^I \to \mathbb{R}^{W \times H \times C}$ which optimally approximates the input-target dependency according to a loss function $\mathcal{L}(G_\theta(y), x)$. Typical choices are the squared Euclidean (SE) loss $\mathcal{L}_2(G_\theta(y), x) = \|G_\theta(y) - x\|_2^2$ or the $\ell_1$ loss $\mathcal{L}_1(G_\theta(y), x) = \|G_\theta(y) - x\|_1$, but these lead to blurred results in many image generation tasks.

We propose a new class of losses, which we call deep perceptual similarity metrics (DeePSiM). These go beyond simple distances in image space and can capture complex and perceptually important properties of images. These losses are weighted sums of three terms: feature loss $\mathcal{L}_{feat}$, adversarial loss $\mathcal{L}_{adv}$, and image space loss $\mathcal{L}_{img}$:

$$\mathcal{L} = \lambda_{feat}\, \mathcal{L}_{feat} + \lambda_{adv}\, \mathcal{L}_{adv} + \lambda_{img}\, \mathcal{L}_{img}. \qquad (1)$$
They correspond to a network architecture, an overview of which is shown in Fig. 2. The architecture consists of three convolutional networks: the generator $G_\theta$ that implements the generator function, the discriminator $D_\varphi$ that discriminates generated images from natural images, and the comparator $C$ that computes features used to compare the images.
Loss in feature space. Given a differentiable comparator $C\colon \mathbb{R}^{W \times H \times C} \to \mathbb{R}^F$, we define

$$\mathcal{L}_{feat} = \sum_i \|C(G_\theta(y_i)) - C(x_i)\|_2^2. \qquad (2)$$
$C$ may be fixed or may be trained; for example, it can be a part of the generator or the discriminator. $\mathcal{L}_{feat}$ alone does not provide a good loss for training. Optimizing just for similarity in a high-level feature space typically leads to high-frequency artifacts [21]. This is because for each natural image there are many non-natural images mapped to the same feature vector². Therefore, a natural image prior is necessary to constrain the generated images to the manifold of natural images.
Adversarial loss. Instead of manually designing a prior, as in Mahendran and Vedaldi [21], we learn it with an approach similar to Generative Adversarial Networks (GANs) of Goodfellow et al. [1]. Namely, we introduce a discriminator $D_\varphi$ which aims to discriminate the generated images from real ones, and which is trained concurrently with the generator $G_\theta$. The generator is trained to "trick" the discriminator network into classifying the generated images as real. Formally, the parameters $\varphi$ of the discriminator are trained by minimizing

$$\mathcal{L}_{discr} = -\sum_i \log(D_\varphi(x_i)) + \log(1 - D_\varphi(G_\theta(y_i))), \qquad (3)$$

and the generator is trained to minimize

$$\mathcal{L}_{adv} = -\sum_i \log D_\varphi(G_\theta(y_i)). \qquad (4)$$
² This is unless the feature representation is specifically designed to map natural and non-natural images far apart, such as the one extracted from the discriminator of a GAN.
Loss in image space. Adversarial training is unstable and sensitive to hyperparameter values. To suppress oscillatory behavior and provide strong gradients during training, we add to our loss function a small squared error term:

$$\mathcal{L}_{img} = \sum_i \|G_\theta(y_i) - x_i\|_2^2. \qquad (5)$$
We found that this term makes hyperparameter tuning significantly easier, although it is not strictly
necessary for the approach to work.
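To make the interplay of Eqs. (1)-(5) concrete, here is a minimal PyTorch-style sketch of the combined objective. It is an illustration under assumptions, not the authors' implementation (which was built on caffe): the module names are placeholders and the discriminator is assumed to output the probability that its input is real.

```python
import torch
import torch.nn.functional as F

def deepsim_losses(generator, discriminator, comparator, y, x,
                   lam_feat=0.01, lam_adv=100.0, lam_img=2e-6):
    """Compute the DeePSiM generator loss (Eq. 1) and the discriminator
    loss (Eq. 3). `comparator` is a fixed feature extractor, e.g. AlexNet
    CONV5; the default coefficients follow Sec. 3.2."""
    x_hat = generator(y)                          # G_theta(y)

    # Feature loss (Eq. 2): squared distance in comparator feature space.
    loss_feat = F.mse_loss(comparator(x_hat), comparator(x), reduction='sum')

    # Adversarial loss (Eq. 4): push the discriminator to call fakes real.
    loss_adv = -torch.log(discriminator(x_hat) + 1e-8).sum()

    # Image-space loss (Eq. 5): small SE term that stabilizes training.
    loss_img = ((x_hat - x) ** 2).sum()

    loss_G = lam_feat * loss_feat + lam_adv * loss_adv + lam_img * loss_img

    # Discriminator loss (Eq. 3), on detached generator output.
    d_real = discriminator(x)
    d_fake = discriminator(x_hat.detach())
    loss_D = -(torch.log(d_real + 1e-8) + torch.log(1 - d_fake + 1e-8)).sum()
    return loss_G, loss_D
```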
3.1 Architectures
Generators. All our generators make use of up-convolutional ("deconvolutional") layers [8]. An up-convolutional layer can be seen as up-sampling followed by a convolution. We always up-sample by a factor of 2 with "bed of nails" upsampling. A basic generator architecture is shown in Table 1. In all networks we use leaky ReLU nonlinearities, that is, $LReLU(x) = \max(x, 0) + \alpha \min(x, 0)$. We used $\alpha = 0.3$ in our experiments. All generators have linear output layers.
Comparators. We experimented with three comparators:
1. AlexNet [22] is a network with 5 convolutional and 2 fully connected layers trained on image
classification. More precisely, in all experiments we used a variant of AlexNet called CaffeNet [23].
2. The network of Wang and Gupta [24] has the same architecture as CaffeNet, but is trained without
supervision. The network is trained to map frames of one video clip close to each other in the feature
space and map frames from different videos far apart. We refer to this network as VideoNet.
3. AlexNet with random weights.
We found that using CONV5 features for comparison leads to the best results in most cases. We used these features unless specified otherwise.
Discriminator. In our setup the job of the discriminator is to analyze the local statistics of images.
Therefore, after five convolutional layers with occasional stride we perform global average pooling.
The result is processed by two fully connected layers, followed by a 2-way softmax. We perform
50% dropout after the global average pooling layer and the first fully connected layer. The exact
architecture of the discriminator is shown in the supplementary material.
3.2 Training details
Coefficients for the adversarial and image loss were respectively $\lambda_{adv} = 100$ and $\lambda_{img} = 2 \cdot 10^{-6}$. The feature loss coefficient $\lambda_{feat}$ depended on the comparator being used. It was set to $0.01$ for the AlexNet CONV5 comparator, which we used in most experiments. Note that a high coefficient in front of the adversarial loss does not mean that this loss dominates the error function; it simply compensates for the fact that both image and feature loss include summation over many spatial locations. We modified the caffe [23] framework to train the networks. For optimization we used Adam [25] with momentum $\beta_1 = 0.9$, $\beta_2 = 0.999$ and initial learning rate $0.0002$. To prevent the discriminator from overfitting during adversarial training we temporarily stopped updating it if the ratio of $\mathcal{L}_{discr}$ and $\mathcal{L}_{adv}$ was below a certain threshold ($0.1$ in our experiments). We used batch size 64 in all experiments. The networks were trained for 500,000-1,000,000 mini-batch iterations.
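The discriminator-freezing rule above is easy to implement. The following sketch reuses `deepsim_losses` from the earlier snippet and shows one plausible reading of the update schedule; the optimizers and the exact point at which the ratio is measured are our assumptions.

```python
def train_step(opt_G, opt_D, generator, discriminator, comparator, y, x,
               ratio_threshold=0.1):
    """One DeePSiM training step: always update the generator; skip the
    discriminator update when L_discr / L_adv drops below the threshold
    (the overfitting guard described in Sec. 3.2)."""
    loss_G, loss_D = deepsim_losses(generator, discriminator, comparator, y, x)

    opt_G.zero_grad()
    loss_G.backward()
    opt_G.step()

    # Measure the current adversarial loss with a fresh forward pass.
    loss_adv = -torch.log(discriminator(generator(y)) + 1e-8).sum()
    if (loss_D / loss_adv).item() >= ratio_threshold:
        opt_D.zero_grad()
        loss_D.backward()
        opt_D.step()
```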
4 Experiments
4.1 Inverting AlexNet
As a main application, we trained networks to reconstruct images from their features extracted by
AlexNet. This is interesting for a number of reasons. First and most straightforward, this shows which
information is preserved in the representation. Second, reconstruction from artificial networks can
be seen as a testing ground for reconstruction from real neural networks. Applying the proposed method to real brain recordings is a very exciting potential extension of our work. Third, it is interesting to see that, in contrast with the standard scheme of "generative pre-training for a discriminative task", "discriminative pre-training for a generative task" can also be fruitful. Lastly, we indirectly show that our loss can be useful for unsupervised learning with generative models.
| Type | fc | fc | fc | reshape | uconv | conv | uconv | conv | uconv | conv | uconv | uconv | uconv |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| InSize | - | - | - | 1 | 4 | 8 | 8 | 16 | 16 | 32 | 32 | 64 | 128 |
| OutCh | 4096 | 4096 | 4096 | 256 | 256 | 512 | 256 | 256 | 128 | 128 | 64 | 32 | 3 |
| Kernel | - | - | - | - | 4 | 3 | 4 | 3 | 4 | 3 | 4 | 4 | 4 |
| Stride | - | - | - | - | ↑2 | 1 | ↑2 | 1 | ↑2 | 1 | ↑2 | ↑2 | ↑2 |

Table 1: Generator architecture for inverting layer FC6 of AlexNet.
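For illustration, Table 1 can be transcribed into a PyTorch module roughly as follows. This is a sketch under assumptions: the padding values are not listed in the table, and transposed convolutions stand in for the paper's bed-of-nails upsampling followed by a convolution.

```python
import torch.nn as nn

def lrelu():
    return nn.LeakyReLU(0.3)  # alpha = 0.3 as in Sec. 3.1

class FC6Generator(nn.Module):
    """Generator for inverting AlexNet FC6, following Table 1."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(4096, 4096), lrelu(),
            nn.Linear(4096, 4096), lrelu(),
            nn.Linear(4096, 4096), lrelu(),
        )
        # After reshaping to 256 x 4 x 4, alternate up-convolutions
        # (kernel 4, stride 2) and plain convolutions (kernel 3).
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(256, 256, 4, stride=2, padding=1), lrelu(),
            nn.Conv2d(256, 512, 3, padding=1), lrelu(),
            nn.ConvTranspose2d(512, 256, 4, stride=2, padding=1), lrelu(),
            nn.Conv2d(256, 256, 3, padding=1), lrelu(),
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), lrelu(),
            nn.Conv2d(128, 128, 3, padding=1), lrelu(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), lrelu(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), lrelu(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),  # linear output
        )

    def forward(self, y):
        h = self.fc(y).view(-1, 256, 4, 4)
        return self.deconv(h)
```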
Figure 3: Representative reconstructions from higher layers of AlexNet (columns: Image, CONV5, FC6, FC7, FC8). General characteristics of images are preserved very well. In some cases (simple objects, landscapes) reconstructions are nearly perfect even from FC8. In the leftmost column the network generates dog images from FC7 and FC8.
Our version of the reconstruction error allows us to reconstruct from very abstract features. Thus, in the context of unsupervised learning, it would not be in conflict with learning such features.
We describe how our method relates to existing work on feature inversion. Suppose we are given a feature representation $\Phi$, which we aim to invert, and an image $I$. There are two inverse mappings: $\Phi_R^{-1}$ such that $\Phi(\Phi_R^{-1}(\phi)) \approx \phi$, and $\Phi_L^{-1}$ such that $\Phi_L^{-1}(\Phi(I)) \approx I$. Recently two approaches to inversion have been proposed, which correspond to these two variants of the inverse.

Mahendran and Vedaldi [21] apply gradient-based optimization to find an image $\tilde{I}$ which minimizes

$$\|\Phi(I) - \Phi(\tilde{I})\|_2^2 + P(\tilde{I}), \qquad (6)$$

where $P$ is a simple natural image prior, such as the total variation (TV) regularizer. This method produces images which are roughly natural and have features similar to the input features, corresponding to $\Phi_R^{-1}$. However, due to the simplistic prior, reconstructions from fully connected layers of AlexNet do not look much like natural images (Fig. 4, bottom row).
Dosovitskiy and Brox [26] train up-convolutional networks on a large training set of natural images to perform the inversion task. They use squared Euclidean distance in the image space as the loss function, which leads to approximating $\Phi_L^{-1}$. The networks learn to reconstruct the color and rough positions of objects, but produce over-smoothed results because they average all potential reconstructions (Fig. 4, middle row).
Our method combines the best of both worlds, as shown in the top row of Fig. 4. The loss in
the feature space helps preserve perceptually important image features. Adversarial training keeps
reconstructions realistic.
Technical details. The generator in this setup takes the features $\Phi(I)$ extracted by AlexNet and generates the image $I$ from them, that is, $y = \Phi(I)$. In general we followed Dosovitskiy and Brox [26] in designing the generators. The only modification is that we inserted more convolutional layers, giving the network more capacity. We reconstruct from outputs of layers CONV5-FC8. In each layer we also include the processing steps following the layer, that is, pooling and non-linearities. For example, CONV5 means pooled features (pool5), and FC6 means rectified values (relu6).
Figure 4: AlexNet inversion: comparison with Dosovitskiy and Brox [26] (D&B) and Mahendran and Vedaldi [21] (M&V). Rows: Our, D&B, M&V; columns: Image, CONV5, FC6, FC7, FC8, for two sets of images. Our results are significantly better, even our failure cases (second image).
The generator used for inverting FC6 is shown in Table 1 . Architectures for other layers are similar,
except that for reconstruction from CONV5, fully connected layers are replaced by convolutional ones.
We trained on 227 × 227 pixel crops of images from the ILSVRC-2012 training set and evaluated on the ILSVRC-2012 validation set.
Ablation study. We tested whether all components of the loss are necessary. Results with some of these components removed are shown in Fig. 1. Clearly the full model performs best. Training just with loss in the image space leads to averaging all potential reconstructions, resulting in over-smoothed images. One might imagine that adversarial training makes images sharp. This indeed happens, but the resulting reconstructions do not correspond to actual objects originally contained in the image. The reason is that any "natural-looking" image which roughly fits the blurry prediction minimizes this loss. Without the adversarial loss, predictions look very noisy because nothing enforces the natural image prior. Results without the image space loss are similar to the full model (see supplementary material), but training was more sensitive to the choice of hyperparameters.
Inversion results. Representative reconstructions from higher layers of AlexNet are shown in Fig. 3 .
Reconstructions from CONV5 are nearly perfect, combining the natural colors and sharpness of details.
Reconstructions from fully connected layers are still strikingly good, preserving the main features of
images, colors, and positions of large objects. More results are shown in the supplementary material.
For quantitative evaluation we compute the normalized Euclidean error $\|a - b\|_2 / N$. The normalization coefficient $N$ is the average of Euclidean distances between all pairs of different samples from the test set. Therefore, an error of 100% means that the algorithm performs on par with randomly drawing a sample from the test set. Errors in image space and in feature space (that is, the distance between the features of the image and the reconstruction) are shown in Table 2. We report all numbers for our best approach, but only some of them for the variants, because of limited computational resources.
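In code, the metric can be computed along these lines (a hypothetical NumPy helper; the function and argument names are ours, and the quadratic pairwise loop is only meant to spell out the definition):

```python
import numpy as np

def normalized_error(reconstructions, targets, test_set):
    """Normalized Euclidean error ||a - b||_2 / N, where N is the average
    pairwise distance between different test samples."""
    dists = [np.linalg.norm(a - b)
             for i, a in enumerate(test_set)
             for j, b in enumerate(test_set) if i != j]
    N = np.mean(dists)
    errors = [np.linalg.norm(r - t) for r, t in zip(reconstructions, targets)]
    return np.mean(errors) / N
```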
The method of Mahendran & Vedaldi performs well in feature space, but not in image space; the method of Dosovitskiy & Brox, vice versa. The presented approach is fairly good on both metrics. This is further supported by the iterative image re-encoding results shown in Fig. 5. To generate these, we compute the features of an image, apply our "inverse" network to those, compute the features of the resulting reconstruction, apply the "inverse" net again, and iterate this procedure. The reconstructions start to change significantly only after 4-8 iterations of this process.
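The re-encoding experiment reduces to a short loop; `extract_features` and `invert` below are placeholder names for AlexNet feature extraction and our trained generator:

```python
def iterative_reencode(image, extract_features, invert, n_iters=8):
    """Iteratively re-encode an image and reconstruct it, as in Fig. 5."""
    reconstructions = []
    current = image
    for _ in range(n_iters):
        current = invert(extract_features(current))
        reconstructions.append(current)
    return reconstructions
```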
Nearest neighbors. Does the network simply memorize the training set? For several validation images we show nearest neighbors (NNs) in the training set, based on distances in different feature spaces (see supplementary material). Two main conclusions are: 1) NNs in feature spaces are much more meaningful than in the image space, and 2) the network does more than just retrieve the NNs.
Interpolation. We can morph images into each other by linearly interpolating between their features and generating the corresponding images. Fig. 7 shows that objects shown in the images smoothly warp into each other. This capability comes "for free" with our generator networks, but in fact it is very non-trivial, and to the best of our knowledge it has not been previously demonstrated to this extent on general natural images. More examples are shown in the supplementary material.
| Method | CONV5 | FC6 | FC7 | FC8 |
|---|---|---|---|---|
| M&V [21] | 71/19 | 80/19 | 82/16 | 84/09 |
| D&B [26] | 35/- | 51/- | 56/- | 58/- |
| Our image loss | -/- | 46/79 | -/- | -/- |
| AlexNet CONV5 | 43/37 | 55/48 | 61/45 | 63/29 |
| VideoNet CONV5 | -/- | 51/57 | -/- | -/- |

Table 2: Normalized inversion error (in %) when reconstructing from different layers of AlexNet with different methods. The first number in each pair is the error in the image space, the second in the feature space.

Figure 5: Iteratively re-encoding images with AlexNet and reconstructing from FC6 (iterations 1, 2, 4, 8). Iteration number shown on the left.

Figure 6: Reconstructions from FC6 with different comparators (Alex5, Alex6, Video5, Rand5). The number indicates the layer from which features were taken.

Figure 7: Interpolation between images (two image pairs) by interpolating between their FC6 features.
Different comparators. The AlexNet network we used above as a comparator has been trained on a huge labeled dataset. Is this supervision really necessary to learn a good comparator? We show here results with several alternatives to the CONV5 features of AlexNet: 1) FC6 features of AlexNet, 2) CONV5 of AlexNet with random weights, 3) CONV5 of the network of Wang and Gupta [24], which we refer to as VideoNet. The results are shown in Fig. 6. While the AlexNet CONV5 comparator provides the best reconstructions, other networks preserve key image features as well.
Sampling pre-images. Given a feature vector $y$, it would be interesting to not just generate a single reconstruction, but to draw arbitrarily many samples from the distribution $p(I|y)$. A straightforward approach would be to inject noise into the generator along with the features, so that the network could randomize its
outputs. This does not yield the desired result, even if the discriminator is conditioned on the feature
vector y. Nothing in the loss function forces the generator to output multiple different reconstructions
per feature vector. An underlying problem is that in the training data there is only one image per
feature vector, i.e., a single sample per conditioning vector. We did not attack this problem in this
paper, but we believe it is an interesting research direction.
4.2 Variational autoencoder
We also show an example application of our loss to generative modeling of images, demonstrating its superiority to the usual image space loss. A standard VAE consists of an encoder Enc and a decoder Dec. The encoder maps an input sample $x$ to a distribution over latent variables $z \sim \mathrm{Enc}(x) = q(z|x)$. Dec maps from this latent space to a distribution over images $\tilde{x} \sim \mathrm{Dec}(z) = p(x|z)$. The loss function is

$$\sum_i -\mathbb{E}_{q(z|x_i)} \log p(x_i|z) + D_{KL}(q(z|x_i)\,\|\,p(z)), \qquad (7)$$

where $p(z)$ is a prior distribution of latent variables and $D_{KL}$ is the Kullback-Leibler divergence.
The first term in Eq. 7 is a reconstruction error. If we assume that the decoder predicts a Gaussian distribution at each pixel, then it reduces to squared Euclidean error in the image space. The second term pulls the distribution of latent variables towards the prior.
Figure 8: Samples from VAEs: (a) with the squared Euclidean loss, (b), (c) with the DeePSiM loss with AlexNet CONV5 and VideoNet CONV5 comparators, respectively.
Both $q(z|x)$ and $p(z)$ are commonly assumed to be Gaussian, in which case the KL divergence can be computed analytically. Please see Kingma and Welling [7] for details.
We use the proposed loss instead of the first term in Eq. 7. This is similar to Larsen et al. [18], but the comparator need not be a part of the discriminator. Technically, there is little difference from training an "inversion" network. First, we allow the encoder weights to be adjusted. Second, instead of predicting a single latent vector $z$, we predict two vectors $\mu$ and $\sigma$ and sample $z = \mu + \sigma \odot \varepsilon$, where $\varepsilon$ is standard Gaussian (zero mean, unit variance) and $\odot$ is element-wise multiplication. Third, we add the KL divergence term to the loss:

$$\mathcal{L}_{KL} = \frac{1}{2} \sum_i \left( \|\mu_i\|_2^2 + \|\sigma_i\|_2^2 - \langle \log \sigma_i^2, \mathbf{1} \rangle \right). \qquad (8)$$

We manually set the weight $\lambda_{KL}$ of the KL term in the overall loss (we found $\lambda_{KL} = 20$ to work well). A proper probabilistic derivation in the presence of adversarial training is non-straightforward, and we leave it for future research.
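A compact sketch of this modified VAE objective (without the adversarial term, and assuming the encoder returns $\mu$ and $\log \sigma^2$; the names and interfaces are ours):

```python
import torch

def vae_deepsim_step(encoder, decoder, comparator, x, lam_kl=20.0):
    """Reparameterized VAE step with the DeePSiM feature term replacing
    the pixel-space reconstruction error in Eq. 7."""
    mu, log_sigma2 = encoder(x)                 # predict mu and log sigma^2
    sigma = torch.exp(0.5 * log_sigma2)
    eps = torch.randn_like(mu)                  # standard Gaussian noise
    z = mu + sigma * eps                        # reparameterization trick

    x_hat = decoder(z)
    loss_feat = ((comparator(x_hat) - comparator(x)) ** 2).sum()

    # KL term of Eq. 8.
    loss_kl = 0.5 * (mu.pow(2) + sigma.pow(2) - log_sigma2).sum()
    return loss_feat + lam_kl * loss_kl
```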
We trained on 227 × 227 pixel crops of 256 × 256 pixel ILSVRC-2012 images. The encoder architecture is the same as AlexNet up to layer FC6, and the decoder architecture is the same as in Table 1. We initialized the encoder with AlexNet weights when using AlexNet as the comparator, and at random when using VideoNet as the comparator. We sampled from the model by sampling the latent variables from a standard Gaussian, $z = \varepsilon$, and generating images from them with the decoder.
Samples generated with the usual SE loss, as well as with two different comparators (AlexNet CONV5, VideoNet CONV5), are shown in Fig. 8. While the Euclidean loss leads to very blurry samples, our method yields images with realistic statistics. Global structure is lacking, but we believe this can be solved by combining the approach with a GAN. Interestingly, the samples trained with the VideoNet comparator and random initialization look qualitatively similar to the ones with AlexNet, showing that supervised training may not be necessary to yield a good loss function for generative modeling.
5 Conclusion
We proposed a class of loss functions applicable to image generation that are based on distances in feature spaces and adversarial training. Applying these to two tasks, feature inversion and random natural image generation, reveals that our loss is clearly superior to the typical loss in image space. In particular, it allows us to generate perceptually important details even from very low-dimensional image representations. Our experiments suggest that the proposed loss function can become a useful tool for generative modeling.
Acknowledgements
We acknowledge funding by the ERC Starting Grant VideoLearn (279401).
References
[1] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio.
Generative adversarial nets. In NIPS, 2014.
[2] L. A. Gatys, A. S. Ecker, and M. Bethge. Image style transfer using convolutional neural networks. In
CVPR, 2016.
[3] G. E. Hinton and T. J. Sejnowski. Learning and relearning in Boltzmann machines. In Parallel Distributed Processing: Volume 1: Foundations, pages 282-317. MIT Press, Cambridge, 1986.
[4] G. E. Hinton, S. Osindero, and Y.-W. Teh. A fast learning algorithm for deep belief nets. Neural Comput., 18(7):1527-1554, 2006.
[5] H. Lee, R. Grosse, R. Ranganath, and A. Y. Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In ICML, pages 609-616, 2009.
[6] G. E. Hinton and R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504-507, July 2006.
[7] D. P. Kingma and M. Welling. Auto-encoding variational Bayes. In ICLR, 2014.
[8] A. Dosovitskiy, J. T. Springenberg, and T. Brox. Learning to generate chairs with convolutional neural
networks. In CVPR, 2015.
[9] S. Daly. The visible differences predictor: An algorithm for the assessment of image fidelity. In Digital Images and Human Vision, pages 179-206. MIT Press, 1993.
[10] C. J. van den Branden Lambrecht and O. Verscheure. Perceptual quality measure using a spatio-temporal model of the human visual system. Electronic Imaging: Science & Technology, 1996.
[11] S. Winkler. A perceptual distortion metric for digital color images. In Proc. SPIE, pages 175-184, 1998.
[12] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli. Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600-612, 2004.
[13] K. Ridgeway, J. Snell, B. Roads, R. S. Zemel, and M. C. Mozer. Learning to generate images with perceptual similarity metrics. arXiv:1511.06409, 2015.
[14] E. L. Denton, S. Chintala, A. Szlam, and R. Fergus. Deep generative image models using a Laplacian pyramid of adversarial networks. In NIPS, pages 1486-1494, 2015.
[15] A. Radford, L. Metz, and S. Chintala. Unsupervised Representation Learning with Deep Convolutional
Generative Adversarial Networks. In ICLR, 2016.
[16] M. Mirza and S. Osindero. Conditional generative adversarial nets. arXiv:1411.1784, 2014.
[17] M. Mathieu, C. Couprie, and Y. LeCun. Deep multi-scale video prediction beyond mean square error. In
ICLR, 2016.
[18] A. B. L. Larsen, S. K. Sønderby, H. Larochelle, and O. Winther. Autoencoding beyond pixels using a learned similarity metric. In ICML, pages 1558-1566, 2016.
[19] A. Lamb, V. Dumoulin, and A. Courville. Discriminative regularization for generative models. arXiv:1602.03220, 2016.
[20] J. Johnson, A. Alahi, and L. Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In ECCV, pages 694-711, 2016.
[21] A. Mahendran and A. Vedaldi. Understanding deep image representations by inverting them. In CVPR,
2015.
[22] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, pages 1106-1114, 2012.
[23] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe:
Convolutional architecture for fast feature embedding. arXiv:1408.5093, 2014.
[24] X. Wang and A. Gupta. Unsupervised learning of visual representations using videos. In ICCV, 2015.
[25] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
[26] A. Dosovitskiy and T. Brox. Inverting visual representations with convolutional networks. In CVPR, 2016.
Coin Betting and Parameter-Free Online Learning
Francesco Orabona
Stony Brook University, Stony Brook, NY
[email protected]
Dávid Pál
Yahoo Research, New York, NY
[email protected]
Abstract
In the recent years, a number of parameter-free algorithms have been developed
for online linear optimization over Hilbert spaces and for learning with expert advice. These algorithms achieve optimal regret bounds that depend on the unknown
competitors, without having to tune the learning rates with oracle choices.
We present a new intuitive framework to design parameter-free algorithms for both
online linear optimization over Hilbert spaces and for learning with expert advice,
based on reductions to betting on outcomes of adversarial coins. We instantiate
it using a betting algorithm based on the Krichevsky-Trofimov estimator. The
resulting algorithms are simple, with no parameters to be tuned, and they improve
or match previous results in terms of regret guarantee and per-round complexity.
1 Introduction
We consider the Online Linear Optimization (OLO) [4, 25] setting. In each round $t$, an algorithm chooses a point $w_t$ from a convex decision set $K$ and then receives a reward vector $g_t$. The algorithm's goal is to keep its regret small, defined as the difference between the cumulative reward of a fixed strategy $u \in K$ and its own cumulative reward, that is

$$\mathrm{Regret}_T(u) = \sum_{t=1}^{T} \langle g_t, u \rangle - \sum_{t=1}^{T} \langle g_t, w_t \rangle.$$
We focus on two particular decision sets, the $N$-dimensional probability simplex $\Delta_N = \{x \in \mathbb{R}^N : x \ge 0, \|x\|_1 = 1\}$ and a Hilbert space $\mathcal{H}$. OLO over $\Delta_N$ is referred to as the problem of Learning with Expert Advice (LEA). We assume bounds on the norms of the reward vectors: for OLO over $\mathcal{H}$, we assume that $\|g_t\| \le 1$, and for LEA we assume that $g_t \in [0, 1]^N$.
OLO is a basic building block of many machine learning problems. For example, Online Convex Optimization (OCO), the problem analogous to OLO where $\langle g_t, u \rangle$ is generalized to an arbitrary convex function $\ell_t(u)$, is solved through a reduction to OLO [25]. LEA [17, 27, 5] provides a way of combining classifiers and it is at the heart of boosting [12]. Batch and stochastic convex optimization can also be solved through a reduction to OLO [25].

To achieve optimal regret, most of the existing online algorithms require the user to set the learning rate (step size) $\eta$ to an unknown/oracle value. For example, to obtain the optimal bound for Online Gradient Descent (OGD), the learning rate has to be set with the knowledge of the norm of the competitor $u$, $\|u\|$ (second entry in Table 1). Likewise, the optimal learning rate for Hedge depends on the KL divergence between the prior weighting $\pi$ and the unknown competitor $u$, $D(u\|\pi)$ (seventh entry in Table 1). Recently, new parameter-free algorithms have been proposed, both for LEA [6, 8, 18, 19, 15, 11] and for OLO/OCO over Hilbert spaces [26, 23, 21, 22, 24]. These algorithms adapt to the number of experts and to the norm of the optimal predictor, respectively, without the need to tune parameters. However, their design and underlying intuition is still a challenge. Foster et al. [11] proposed a unified framework, but it is not constructive. Furthermore, all existing algorithms for LEA have either a sub-optimal regret bound (e.g. an extra $O(\log \log T)$ factor) or a sub-optimal running time (e.g. requiring the solution of a numerical problem in every round, or with extra factors); see Table 1.
| Algorithm | Worst-case regret guarantee | Per-round time complexity |
|---|---|---|
| OGD, $\eta = \frac{1}{\sqrt{T}}$ [25] | $O((1 + \Vert u \Vert^2)\sqrt{T})$, $\forall u \in \mathcal{H}$ | $O(1)$ |
| OGD, $\eta = \frac{U}{\sqrt{T}}$ [25] | $U\sqrt{T}$ for any $u \in \mathcal{H}$ s.t. $\Vert u \Vert \le U$ | $O(1)$ |
| [23] | $O(\Vert u \Vert \ln(1 + \Vert u \Vert T)\sqrt{T})$, $\forall u \in \mathcal{H}$ | $O(1)$ |
| [22, 24] | $O(\Vert u \Vert \sqrt{T \ln(1 + \Vert u \Vert T)})$, $\forall u \in \mathcal{H}$ | $O(1)$ |
| This paper, Sec. 7.1 | $O(\Vert u \Vert \sqrt{T \ln(1 + \Vert u \Vert T)})$, $\forall u \in \mathcal{H}$ | $O(1)$ |
| Hedge, $\eta = \sqrt{\frac{\ln N}{T}}$, $\pi_i = \frac{1}{N}$ [12] | $O(\sqrt{T \ln N})$, $\forall u \in \Delta_N$ | $O(N)$ |
| Hedge, $\eta = \sqrt{\frac{U}{T}}$ [12] | $O(\sqrt{T U})$ for any $u \in \Delta_N$ s.t. $D(u \Vert \pi) \le U$ | $O(N)$ |
| [6] | $O(\sqrt{T((1 + D(u \Vert \pi))^2 + \ln N)})$, $\forall u \in \Delta_N$ | $O(NK)$¹ |
| [8] | $O(\sqrt{T(1 + D(u \Vert \pi))})$, $\forall u \in \Delta_N$ | $O(NK)$¹ |
| [8, 19, 15]² | $O(\sqrt{T(\ln \ln T + D(u \Vert \pi))})$, $\forall u \in \Delta_N$ | $O(N)$ |
| [11] | $O(\sqrt{T(1 + D(u \Vert \pi))})$, $\forall u \in \Delta_N$ | $O(N \ln \max_{u \in \Delta_N} D(u \Vert \pi))$³ |
| This paper, Sec. 7.2 | $O(\sqrt{T(1 + D(u \Vert \pi))})$, $\forall u \in \Delta_N$ | $O(N)$ |

Table 1: Algorithms for OLO over Hilbert space and LEA.
Contributions. We show that a more fundamental notion subsumes both OLO and LEA parameter-free algorithms. We prove that the ability to maximize the wealth in bets on the outcomes of coin flips implies OLO and LEA parameter-free algorithms. We develop a novel potential-based framework for betting algorithms. It gives intuition to previous constructions and, instantiated with the Krichevsky-Trofimov estimator, provides new and elegant algorithms for OLO and LEA. The new algorithms also have optimal worst-case guarantees on regret and time complexity; see Table 1.
2 Preliminaries
We begin by providing some definitions. The Kullback-Leibler (KL) divergence between two discrete distributions $p$ and $q$ is $D(p\|q) = \sum_i p_i \ln(p_i/q_i)$. If $p, q$ are real numbers in $[0, 1]$, we denote by $D(p\|q) = p\ln(p/q) + (1-p)\ln((1-p)/(1-q))$ the KL divergence between two Bernoulli distributions with parameters $p$ and $q$. We denote by $\mathcal{H}$ a Hilbert space, by $\langle \cdot, \cdot \rangle$ its inner product, and by $\|\cdot\|$ the induced norm. We denote by $\|\cdot\|_1$ the 1-norm in $\mathbb{R}^N$. A function $F: I \to \mathbb{R}_+$ is called logarithmically convex iff $f(x) = \ln(F(x))$ is convex. Let $f: V \to \mathbb{R} \cup \{\pm\infty\}$; the Fenchel conjugate of $f$ is $f^*: V^* \to \mathbb{R} \cup \{\pm\infty\}$, defined on the dual vector space $V^*$ by $f^*(\theta) = \sup_{x \in V} \langle \theta, x \rangle - f(x)$. A function $f: V \to \mathbb{R} \cup \{+\infty\}$ is said to be proper if there exists $x \in V$ such that $f(x)$ is finite. If $f$ is a proper lower semi-continuous convex function then $f^*$ is also proper lower semi-continuous convex, and $f^{**} = f$.
Coin Betting. We consider a gambler making repeated bets on the outcomes of adversarial coin flips. The gambler starts with an initial endowment $\epsilon > 0$. In each round $t$, he bets on the outcome of a coin flip $g_t \in \{-1, 1\}$, where $+1$ denotes heads and $-1$ denotes tails. We do not make any assumption on how $g_t$ is generated; that is, it can be chosen by an adversary.

The gambler can bet any amount on either heads or tails. However, he is not allowed to borrow any additional money. If he loses, he loses the betted amount; if he wins, he gets the betted amount back and, in addition to that, he gets the same amount as a reward. We encode the gambler's bet in round $t$ by a single number $w_t$. The sign of $w_t$ encodes whether he is betting on heads or tails. The absolute value encodes the betted amount. We define $\mathrm{Wealth}_t$ as the gambler's wealth at the end of round $t$ and $\mathrm{Reward}_t$ as the gambler's net reward (the difference of wealth and initial endowment), that is

$$\mathrm{Wealth}_t = \epsilon + \sum_{i=1}^{t} w_i g_i \quad \text{and} \quad \mathrm{Reward}_t = \mathrm{Wealth}_t - \epsilon. \qquad (1)$$

In the following, we will also refer to a bet with $\beta_t$, where $\beta_t$ is such that

$$w_t = \beta_t\, \mathrm{Wealth}_{t-1}. \qquad (2)$$

The absolute value of $\beta_t$ is the fraction of the current wealth to bet, and the sign of $\beta_t$ encodes whether he is betting on heads or tails. The constraint that the gambler cannot borrow money implies that $\beta_t \in [-1, 1]$. We also generalize the problem slightly by allowing the outcome of the coin flip $g_t$ to be any real number in the interval $[-1, 1]$; wealth and reward in (1) remain exactly the same.
¹ These algorithms require solving a numerical problem at each step. The number $K$ is the number of steps needed to reach the required precision. Neither the precision nor $K$ are calculated in these papers.
² The proof in [15] can be modified to prove a KL bound, see http://blog.wouterkoolen.info.
³ A variant of the algorithm in [11] can be implemented with the stated time complexity [10].
3 Warm-Up: From Betting to One-Dimensional Online Linear Optimization
In this section, we sketch how to reduce one-dimensional OLO to betting on a coin. The reasoning
for generic Hilbert spaces (Section 5) and for LEA (Section 6) will be similar. We will show that
the betting view provides a natural way for the analysis and design of online learning algorithms,
where the only design choice is the potential function of the betting algorithm (Section 4). A specific
example of coin betting potential and the resulting algorithms are in Section 7.
As a warm-up, let us consider an algorithm for OLO over the one-dimensional Hilbert space $\mathbb{R}$. Let $\{w_t\}_{t=1}^{\infty}$ be its sequence of predictions on a sequence of rewards $\{g_t\}_{t=1}^{\infty}$, $g_t \in [-1, 1]$. The total reward of the algorithm after $t$ rounds is $\mathrm{Reward}_t = \sum_{i=1}^{t} g_i w_i$. Also, even if in OLO there is no concept of "wealth", define the wealth of the OLO algorithm as $\mathrm{Wealth}_t = \epsilon + \mathrm{Reward}_t$, as in (1).

We now restrict our attention to algorithms whose predictions $w_t$ are of the form of a bet, that is $w_t = \beta_t\, \mathrm{Wealth}_{t-1}$, where $\beta_t \in [-1, 1]$. We will see that the restriction on $\beta_t$ does not prevent us from obtaining parameter-free algorithms with optimal bounds.

Given the above, it is immediate to see that any coin betting algorithm that, on a sequence of coin flips $\{g_t\}_{t=1}^{\infty}$, $g_t \in [-1, 1]$, bets the amounts $w_t$ can be used as an OLO algorithm in the one-dimensional Hilbert space $\mathbb{R}$. But what would be the regret of such OLO algorithms?
Assume that the betting algorithm at hand guarantees that its wealth is at least $F(\sum_{t=1}^{T} g_t)$ starting from an endowment $\epsilon$, for a given potential function $F$. Then

$$\mathrm{Reward}_T = \sum_{t=1}^{T} g_t w_t = \mathrm{Wealth}_T - \epsilon \ge F\Big(\sum_{t=1}^{T} g_t\Big) - \epsilon. \qquad (3)$$

Intuitively, if the reward is big we can expect the regret to be small. Indeed, the following lemma converts the lower bound on the reward to an upper bound on the regret.
Lemma 1 (Reward-Regret relationship [22]). Let $V, V^*$ be a pair of dual vector spaces. Let $F: V \to \mathbb{R} \cup \{+\infty\}$ be a proper convex lower semi-continuous function and let $F^*: V^* \to \mathbb{R} \cup \{+\infty\}$ be its Fenchel conjugate. Let $w_1, w_2, \dots, w_T \in V$ and $g_1, g_2, \dots, g_T \in V^*$. Let $\epsilon \in \mathbb{R}$. Then,

$$\underbrace{\sum_{t=1}^{T} \langle g_t, w_t \rangle}_{\mathrm{Reward}_T} \ge F\Big(\sum_{t=1}^{T} g_t\Big) - \epsilon \quad \text{if and only if} \quad \forall u \in V^*, \ \underbrace{\sum_{t=1}^{T} \langle g_t, u - w_t \rangle}_{\mathrm{Regret}_T(u)} \le F^*(u) + \epsilon.$$
Applying the lemma, we get a regret upper bound: $\mathrm{Regret}_T(u) \le F^*(u) + \epsilon$ for all $u \in \mathcal{H}$.

To summarize, if we have a betting algorithm that guarantees a minimum wealth of $F(\sum_{t=1}^{T} g_t)$, it can be used to design and analyze a one-dimensional OLO algorithm. The faster the growth of the wealth, the smaller the regret will be. Moreover, the lemma also shows that trying to design an algorithm that is adaptive to $u$ is equivalent to designing an algorithm that is adaptive to $\sum_{t=1}^{T} g_t$. Also, most importantly, methods that guarantee optimal wealth for the betting scenario are already known; see, e.g., [4, Chapter 9]. We can just re-use them to get optimal online algorithms!
4 Designing a Betting Algorithm: Coin Betting Potentials
For sequential betting on i.i.d. coin flips, an optimal strategy has been proposed by Kelly [14]. The strategy assumes that the coin flips $\{g_t\}_{t=1}^{\infty}$, $g_t \in \{+1, -1\}$, are generated i.i.d. with known probability of heads. If $p \in [0, 1]$ is the probability of heads, the Kelly bet is to bet $\beta_t = 2p - 1$ at each round. He showed that, in the long run, this strategy will provide more wealth than betting any other fixed fraction of the current wealth [14].

For adversarial coins, Kelly betting does not make sense. With perfect knowledge of the future, the gambler could always bet everything on the right outcome. Hence, after $T$ rounds from an initial endowment $\epsilon$, the maximum wealth he could get is $\epsilon\, 2^T$. Instead, assume he bets the same fraction $\beta$ of his wealth at each round. Let $\mathrm{Wealth}_T(\beta)$ be the wealth of such a strategy after $T$ rounds. As observed in [21], the optimal fixed fraction to bet is $\beta^* = (\sum_{t=1}^{T} g_t)/T$ and it gives the wealth

$$\mathrm{Wealth}_T(\beta^*) = \epsilon \exp\left( T \cdot D\left(\tfrac{1}{2} + \tfrac{\sum_{t=1}^{T} g_t}{2T} \,\Big\|\, \tfrac{1}{2}\right) \right) \ge \epsilon \exp\left( \frac{(\sum_{t=1}^{T} g_t)^2}{2T} \right), \qquad (4)$$

where the inequality follows from Pinsker's inequality [9, Lemma 11.6.1].
However, even without knowledge of the future, it is possible to get very close to the wealth in (4). This problem was studied by Krichevsky and Trofimov [16], who proposed that after seeing the coin flips $g_1, g_2, \dots, g_{t-1}$ the empirical estimate $k_t = \frac{1/2 + \sum_{i=1}^{t-1} \mathbf{1}[g_i = +1]}{t}$ should be used instead of $p$. Their estimate is commonly called the KT estimator.¹ The KT estimator results in the betting

$$\beta_t = 2k_t - 1 = \frac{\sum_{i=1}^{t-1} g_i}{t}, \qquad (5)$$

which we call adaptive Kelly betting based on the KT estimator. It looks like an online and slightly biased version of the oracle choice of $\beta^*$. This strategy guarantees²
$$\mathrm{Wealth}_T \ge \frac{\mathrm{Wealth}_T(\beta^*)}{2\sqrt{T}} = \frac{\epsilon}{2\sqrt{T}} \exp\left( T \cdot D\left(\tfrac{1}{2} + \tfrac{\sum_{t=1}^{T} g_t}{2T} \,\Big\|\, \tfrac{1}{2}\right) \right) \ge \frac{\epsilon}{2\sqrt{T}} \exp\left( \frac{(\sum_{t=1}^{T} g_t)^2}{2T} \right).$$

This guarantee is optimal up to constant factors [4] and mirrors the guarantee of the Kelly bet.
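As a minimal simulation of this strategy (a sketch; the names are ours):

```python
def kt_coin_betting(coin_flips, endowment=1.0):
    """Adaptive Kelly betting with the KT estimator (Eq. 5): bet the
    signed fraction beta_t = (sum of past flips) / t of the current
    wealth on each adversarial coin flip g_t in [-1, 1]."""
    wealth = endowment
    past_sum = 0.0
    for t, g in enumerate(coin_flips, start=1):
        beta = past_sum / t          # KT betting fraction, |beta| < 1
        bet = beta * wealth          # w_t = beta_t * Wealth_{t-1}
        wealth += bet * g            # win or lose the betted amount
        past_sum += g
    return wealth
```

On a coin with a persistent bias, the final wealth grows exponentially in $T$, matching the $\exp((\sum_t g_t)^2 / (2T))$ factor in the guarantee above.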
Here, we propose a new set of definitions that allows us to generalize the strategy of adaptive Kelly betting based on the KT estimator. For these strategies it will be possible to prove that, for any $g_1, g_2, \dots, g_t \in [-1, 1]$,

$$\mathrm{Wealth}_t \ge F_t\Big(\sum_{i=1}^{t} g_i\Big), \qquad (6)$$

where $F_t(x)$ is a certain function. We call such functions potentials. The betting strategy will be determined uniquely by the potential (see (c) in Definition 2), and we restrict our attention to potentials for which (6) holds. These constraints are specified in the definition below.

Definition 2 (Coin Betting Potential). Let $\epsilon > 0$. Let $\{F_t\}_{t=0}^{\infty}$ be a sequence of functions $F_t: (-a_t, a_t) \to \mathbb{R}_+$ where $a_t > t$. The sequence $\{F_t\}_{t=0}^{\infty}$ is called a sequence of coin betting potentials for initial endowment $\epsilon$ if it satisfies the following three conditions:

(a) $F_0(0) = \epsilon$.

(b) For every $t \ge 0$, $F_t(x)$ is even, logarithmically convex, strictly increasing on $[0, a_t)$, and $\lim_{x \to a_t} F_t(x) = +\infty$.

(c) For every $t \ge 1$, every $x \in [-(t-1), t-1]$ and every $g \in [-1, 1]$, $(1 + g\beta_t)\, F_{t-1}(x) \ge F_t(x + g)$, where

$$\beta_t = \frac{F_t(x+1) - F_t(x-1)}{F_t(x+1) + F_t(x-1)}. \qquad (7)$$

The sequence $\{F_t\}_{t=0}^{\infty}$ is called a sequence of excellent coin betting potentials for initial endowment $\epsilon$ if it satisfies conditions (a)-(c) and condition (d) below.

(d) For every $t \ge 0$, $F_t$ is twice-differentiable and satisfies $x \cdot F_t''(x) \ge F_t'(x)$ for every $x \in [0, a_t)$.
Let's give some intuition on this definition. First, let's show by induction on $t$ that (b) and (c) of the definition together with (2) give a betting strategy that satisfies (6). The base case $t = 0$ is trivial. At time $t \ge 1$, bet $w_t = \beta_t\, \mathrm{Wealth}_{t-1}$ where $\beta_t$ is defined in (7); then

$$\mathrm{Wealth}_t = \mathrm{Wealth}_{t-1} + w_t g_t = (1 + g_t\beta_t)\, \mathrm{Wealth}_{t-1} \ge (1 + g_t\beta_t)\, F_{t-1}\Big(\sum_{i=1}^{t-1} g_i\Big) \ge F_t\Big(\sum_{i=1}^{t-1} g_i + g_t\Big) = F_t\Big(\sum_{i=1}^{t} g_i\Big).$$

The formula for the potential-based strategy (7) might seem strange. However, it is derived (see Theorem 8 in Appendix B) by minimizing the worst-case value of the right-hand side of the inequality used w.r.t. $g_t$ in the induction proof above: $F_{t-1}(x) \ge \frac{F_t(x + g_t)}{1 + g_t\beta_t}$.

The last point, (d), is a technical condition that allows us to seamlessly reduce OLO over a Hilbert space to the one-dimensional problem, characterizing the worst-case direction for the reward vectors.
¹ Compared to the maximum likelihood estimate $\frac{\sum_{i=1}^{t-1} \mathbf{1}[g_i = +1]}{t-1}$, the KT estimator shrinks slightly towards $1/2$.
² See Appendix A for a proof. For lack of space, all the appendices are in the supplementary material.
Regarding the design of coin betting potentials, we expect any potential that approximates the best possible wealth in (4) to be a good candidate. In fact, $F_t(x) = \epsilon \exp\left(x^2/(2t)\right)/\sqrt{t}$, essentially the potential used in the parameter-free algorithms in [22, 24] for OLO and in [6, 18, 19] for LEA, approximates (4) and is an excellent coin betting potential; see Theorem 9 in Appendix B. Hence, our framework provides intuition for previous constructions, and in Section 7 we show new examples of coin betting potentials.

In the next two sections, we present the reductions to effortlessly solve both the generic OLO case and LEA with a betting potential.
5 From Coin Betting to OLO over Hilbert Space
In this section, generalizing the one-dimensional construction in Section 3, we show how to use a sequence of excellent coin betting potentials $\{F_t\}_{t=0}^{\infty}$ to construct an algorithm for OLO over a Hilbert space and how to prove a regret bound for it.

We define reward and wealth analogously to the one-dimensional case: $\mathrm{Reward}_t = \sum_{i=1}^{t} \langle g_i, w_i \rangle$ and $\mathrm{Wealth}_t = \epsilon + \mathrm{Reward}_t$. Given a sequence of coin betting potentials $\{F_t\}_{t=0}^{\infty}$, using (7) we define the fraction

$$\beta_t = \frac{F_t\big(\big\|\sum_{i=1}^{t-1} g_i\big\| + 1\big) - F_t\big(\big\|\sum_{i=1}^{t-1} g_i\big\| - 1\big)}{F_t\big(\big\|\sum_{i=1}^{t-1} g_i\big\| + 1\big) + F_t\big(\big\|\sum_{i=1}^{t-1} g_i\big\| - 1\big)}. \qquad (8)$$

The prediction of the OLO algorithm is defined similarly to the one-dimensional case, but now we also need a direction in the Hilbert space:

$$w_t = \beta_t\, \mathrm{Wealth}_{t-1}\, \frac{\sum_{i=1}^{t-1} g_i}{\big\|\sum_{i=1}^{t-1} g_i\big\|} = \beta_t\, \frac{\sum_{i=1}^{t-1} g_i}{\big\|\sum_{i=1}^{t-1} g_i\big\|} \left( \epsilon + \sum_{i=1}^{t-1} \langle g_i, w_i \rangle \right). \qquad (9)$$

If $\sum_{i=1}^{t-1} g_i$ is the zero vector, we define $w_t$ to be the zero vector as well. For this prediction strategy we can prove the following regret guarantee, proved in Appendix C. The proof reduces the general Hilbert case to the 1-d case, thanks to (d) in Definition 2, then it follows the reasoning of Section 3.
Theorem 3 (Regret Bound for OLO in Hilbert Spaces). Let $\{F_t\}_{t=0}^{\infty}$ be a sequence of excellent coin betting potentials. Let $\{g_t\}_{t=1}^{\infty}$ be any sequence of reward vectors in a Hilbert space $\mathcal{H}$ such that $\|g_t\| \le 1$ for all $t$. Then, the algorithm that makes the prediction $w_t$ defined by (9) and (8) satisfies

$$\forall T \ge 0, \ \forall u \in \mathcal{H}, \quad \mathrm{Regret}_T(u) \le F_T^*(\|u\|) + \epsilon.$$

6 From Coin Betting to Learning with Expert Advice
In this section, we show how to use the algorithm for OLO over the one-dimensional Hilbert space $\mathbb{R}$ from Section 3 (which is itself based on a coin betting strategy) to construct an algorithm for LEA. Let $N \ge 2$ be the number of experts and $\Delta_N$ be the $N$-dimensional probability simplex. Let $\pi = (\pi_1, \pi_2, \ldots, \pi_N) \in \Delta_N$ be any prior distribution. Let $\mathcal{A}$ be an algorithm for OLO over the one-dimensional Hilbert space $\mathbb{R}$, based on a sequence of coin betting potentials $\{F_t\}_{t=0}^{\infty}$ with initial endowment$^3$ 1. We instantiate $N$ copies of $\mathcal{A}$.
Consider any round $t$. Let $w_{t,i} \in \mathbb{R}$ be the prediction of the $i$-th copy of $\mathcal{A}$. The LEA algorithm computes $\hat{p}_t = (\hat{p}_{t,1}, \hat{p}_{t,2}, \ldots, \hat{p}_{t,N}) \in \mathbb{R}_{0,+}^{N}$ as
$$\hat{p}_{t,i} = \pi_i \cdot [w_{t,i}]_+, \tag{10}$$
where $[x]_+ = \max\{0, x\}$ is the positive part of $x$. Then, the LEA algorithm predicts $p_t = (p_{t,1}, p_{t,2}, \ldots, p_{t,N}) \in \Delta_N$ as
$$p_t = \frac{\hat{p}_t}{\|\hat{p}_t\|_1}. \tag{11}$$
If $\|\hat{p}_t\|_1 = 0$, the algorithm predicts the prior $\pi$. Then, the algorithm receives the reward vector $g_t = (g_{t,1}, g_{t,2}, \ldots, g_{t,N}) \in [0,1]^N$. Finally, it feeds the reward to each copy of $\mathcal{A}$. The reward for the $i$-th copy of $\mathcal{A}$ is $\tilde{g}_{t,i} \in [-1,1]$ defined as
$$\tilde{g}_{t,i} = \begin{cases} g_{t,i} - \langle g_t, p_t\rangle & \text{if } w_{t,i} > 0,\\ [\,g_{t,i} - \langle g_t, p_t\rangle\,]_+ & \text{if } w_{t,i} \le 0. \end{cases} \tag{12}$$
$^3$ Any initial endowment $\epsilon > 0$ can be rescaled to 1. Instead of $F_t(x)$ we would use $F_t(x)/\epsilon$. The $w_t$ would become $w_t/\epsilon$, but $p_t$ is invariant to scaling of $w_t$. Hence, the LEA algorithm is the same regardless of $\epsilon$.
The construction above defines a LEA algorithm that plays the predictions $p_t$, based on the algorithm $\mathcal{A}$. We can prove the following regret bound for it.
Theorem 4 (Regret Bound for Experts). Let $\mathcal{A}$ be an algorithm for OLO over the one-dimensional Hilbert space $\mathbb{R}$, based on the coin betting potentials $\{F_t\}_{t=0}^{\infty}$ for an initial endowment of 1. Let $f_t^{-1}$ be the inverse of $f_t(x) = \ln(F_t(x))$ restricted to $[0,\infty)$. Then, the regret of the LEA algorithm with prior $\pi \in \Delta_N$ that predicts at each round with $p_t$ in (11) satisfies
$$\forall T \ge 0,\ \forall u \in \Delta_N: \quad \mathrm{Regret}_T(u) \le f_T^{-1}\big(\mathrm{D}(u\,\|\,\pi)\big).$$
The proof, in Appendix D, is based on the fact that (10)–(12) guarantee that $\sum_{i=1}^{N}\pi_i\,\tilde{g}_{t,i}\,w_{t,i} \le 0$, and on a variation of the change of measure lemma used in the PAC-Bayes literature, e.g. [20].
7 Applications of the Krichevsky-Trofimov Estimator to OLO and LEA
In the previous sections, we have shown that a coin betting potential with a guaranteed rapid growth of the wealth will give good regret guarantees for OLO and LEA. Here, we show that the KT estimator has an associated excellent coin betting potential, which we call the KT potential. Then, the optimal wealth guarantee of the KT potentials will translate to optimal parameter-free regret bounds.
The sequence of excellent coin betting potentials for an initial endowment $\epsilon$ corresponding to the adaptive Kelly betting strategy $\beta_t$ defined by (5) based on the KT estimator is
$$F_t(x) = \epsilon\,\frac{2^t\,\Gamma\!\left(\frac{t+1}{2}+\frac{x}{2}\right)\Gamma\!\left(\frac{t+1}{2}-\frac{x}{2}\right)}{\pi\,t!}, \qquad t \ge 0,\ x \in (-t-1,\, t+1), \tag{13}$$
where $\Gamma(x) = \int_0^{\infty} t^{x-1}e^{-t}\,dt$ is Euler's gamma function; see Theorem 13 in Appendix E. This potential was used to prove regret bounds for online prediction with the logarithmic loss [16][4, Chapter 9.7]. Theorem 13 also shows that the KT betting strategy $\beta_t$ as defined by (5) satisfies (7).
This potential has the nice property that it satisfies the inequality in (c) of Definition 2 with equality when $g_t \in \{-1, 1\}$, i.e., $F_t(x + g_t) = (1 + g_t\beta_t)\,F_{t-1}(x)$.
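As a quick numerical sanity check (our own illustration, not from the paper; function and variable names are ours), the following sketch evaluates (13) in log-space via `lgamma` and verifies, on a random $\pm 1$ sequence, that the fraction (7) computed from $F_t$ coincides with the KT fraction $\big(\sum_{i=1}^{t-1} g_i\big)/t$, and that the bettor's wealth equals $F_t\big(\sum_{i=1}^{t} g_i\big)$ exactly, as the equality case above implies.

```python
import math
import random

def log_F(t, x, eps=1.0):
    # KT potential (13) in log-space: eps * 2^t * G((t+1+x)/2) * G((t+1-x)/2) / (pi * t!)
    return (math.log(eps) + t * math.log(2.0)
            + math.lgamma((t + 1 + x) / 2.0) + math.lgamma((t + 1 - x) / 2.0)
            - math.log(math.pi) - math.lgamma(t + 1))

def F(t, x, eps=1.0):
    return math.exp(log_F(t, x, eps))

random.seed(0)
eps, wealth, s = 1.0, 1.0, 0.0          # s is the sum of past outcomes g_i
for t in range(1, 200):
    beta_pot = (F(t, s + 1) - F(t, s - 1)) / (F(t, s + 1) + F(t, s - 1))  # fraction (7)
    beta_kt = s / t                                                       # KT fraction
    assert abs(beta_pot - beta_kt) < 1e-8
    g = random.choice([-1.0, 1.0])
    wealth *= 1.0 + g * beta_kt          # bet w_t = beta_t * Wealth_{t-1}
    s += g
    # For g_t in {-1, +1} inequality (c) holds with equality, so wealth tracks F_t exactly.
    assert abs(wealth - F(t, s, eps)) <= 1e-8 * max(1.0, wealth)
print("final wealth:", wealth)
```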
We also generalize the KT potentials to $\delta$-shifted KT potentials, where $\delta \ge 0$, defined as
$$F_t(x) = \frac{2^t\,\Gamma(\delta+1)\,\Gamma\!\left(\frac{t+\delta+1}{2}+\frac{x}{2}\right)\Gamma\!\left(\frac{t+\delta+1}{2}-\frac{x}{2}\right)}{\Gamma\!\left(\frac{\delta+1}{2}\right)^2\,\Gamma(t+\delta+1)}.$$
The reason for its name is that, up to a multiplicative constant, $F_t$ is equal to the KT potential shifted in time by $\delta$. Theorem 13 also proves that the $\delta$-shifted KT potentials are excellent coin betting potentials with initial endowment 1, and the corresponding betting fraction is $\beta_t = \frac{\sum_{j=1}^{t-1} g_j}{\delta + t}$.
7.1 OLO in Hilbert Space
We apply the KT potential to the construction of an OLO algorithm over a Hilbert space $\mathcal{H}$. We will use (9), and we just need to calculate $\beta_t$. According to Theorem 13 in Appendix E, the formula for $\beta_t$ simplifies to $\beta_t = \frac{\|\sum_{i=1}^{t-1} g_i\|}{t}$, so that $w_t = \frac{1}{t}\left(\epsilon + \sum_{i=1}^{t-1}\langle g_i, w_i\rangle\right)\sum_{i=1}^{t-1} g_i$.
The resulting algorithm is stated as Algorithm 1. We derive a regret bound for it as a very simple corollary of Theorem 3 applied to the KT potential (13). The only technical part of the proof, in Appendix F, is an upper bound on $F_t^*$, since it cannot be expressed as an elementary function.
Corollary 5 (Regret Bound for Algorithm 1). Let $\epsilon > 0$. Let $\{g_t\}_{t=1}^{\infty}$ be any sequence of reward vectors in a Hilbert space $\mathcal{H}$ such that $\|g_t\| \le 1$. Then Algorithm 1 satisfies
$$\forall T \ge 0,\ \forall u \in \mathcal{H}: \quad \mathrm{Regret}_T(u) \le \|u\|\sqrt{T\,\ln\!\left(1 + \frac{24\,T^2\|u\|^2}{\epsilon^2}\right)} + \epsilon\left(1 - \frac{e^{-1}}{\sqrt{T}}\right).$$
Algorithm 1 Algorithm for OLO over Hilbert space $\mathcal{H}$ based on the KT potential
Require: Initial endowment $\epsilon > 0$
1: for $t = 1, 2, \ldots$ do
2:   Predict with $w_t \leftarrow \frac{1}{t}\left(\epsilon + \sum_{i=1}^{t-1}\langle g_i, w_i\rangle\right)\sum_{i=1}^{t-1} g_i$
3:   Receive reward vector $g_t \in \mathcal{H}$ such that $\|g_t\| \le 1$
4: end for
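For concreteness, here is a minimal NumPy sketch of Algorithm 1 (our own rendering; names are ours, not from the paper). Only two running sums are maintained, so each round costs $O(d)$ time and memory for $d$-dimensional rewards.

```python
import numpy as np

def kt_olo(reward_stream, d, eps=1.0):
    """Algorithm 1: parameter-free OLO over R^d based on the KT potential.

    reward_stream yields vectors g_t with ||g_t|| <= 1; returns all predictions w_t."""
    sum_g = np.zeros(d)      # running sum of past reward vectors
    sum_inner = 0.0          # running sum of <g_i, w_i>
    predictions = []
    for t, g in enumerate(reward_stream, start=1):
        w = (eps + sum_inner) / t * sum_g   # line 2 of Algorithm 1 (w_1 = 0 automatically)
        predictions.append(w)
        sum_inner += float(g @ w)
        sum_g += g
    return predictions

# Example: 100 random rewards on the unit sphere in R^5.
rng = np.random.default_rng(0)
gs = [g / np.linalg.norm(g) for g in rng.standard_normal((100, 5))]
print(kt_olo(gs, d=5)[-1])
```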
Algorithm 2 Algorithm for Learning with Expert Advice based on the $\delta$-shifted KT potential
Require: Number of experts $N$, prior distribution $\pi \in \Delta_N$, number of rounds $T$
1: for $t = 1, 2, \ldots, T$ do
2:   For each $i \in [N]$, set $w_{t,i} \leftarrow \frac{\sum_{j=1}^{t-1}\tilde{g}_{j,i}}{t + T/2}\left(1 + \sum_{j=1}^{t-1}\tilde{g}_{j,i}\,w_{j,i}\right)$
3:   For each $i \in [N]$, set $\hat{p}_{t,i} \leftarrow \pi_i\,[w_{t,i}]_+$
4:   Predict with $p_t \leftarrow \hat{p}_t/\|\hat{p}_t\|_1$ if $\|\hat{p}_t\|_1 > 0$, and $p_t \leftarrow \pi$ if $\|\hat{p}_t\|_1 = 0$
5:   Receive reward vector $g_t \in [0,1]^N$
6:   For each $i \in [N]$, set $\tilde{g}_{t,i} \leftarrow g_{t,i} - \langle g_t, p_t\rangle$ if $w_{t,i} > 0$, and $\tilde{g}_{t,i} \leftarrow [\,g_{t,i} - \langle g_t, p_t\rangle\,]_+$ if $w_{t,i} \le 0$
7: end for
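Algorithm 2 admits an equally compact sketch (again ours): the $N$ copies of the one-dimensional bettor collapse into two running-sum vectors.

```python
import numpy as np

def kt_lea(reward_matrix, prior=None):
    """Algorithm 2: LEA via N copies of the 1-d delta-shifted KT bettor (delta = T/2).

    reward_matrix has shape (T, N) with entries in [0, 1]; returns the predictions p_t."""
    T, N = reward_matrix.shape
    pi = np.full(N, 1.0 / N) if prior is None else prior
    sum_gtil = np.zeros(N)   # per-expert running sum of transformed rewards
    sum_gw = np.zeros(N)     # per-expert running sum of g~_{j,i} * w_{j,i}
    preds = []
    for t in range(1, T + 1):
        w = sum_gtil / (t + T / 2.0) * (1.0 + sum_gw)              # line 2
        p_hat = pi * np.maximum(w, 0.0)                            # line 3
        p = p_hat / p_hat.sum() if p_hat.sum() > 0 else pi.copy()  # line 4
        preds.append(p)
        g = reward_matrix[t - 1]                                   # line 5
        diff = g - float(g @ p)
        g_til = np.where(w > 0, diff, np.maximum(diff, 0.0))       # line 6, i.e. (12)
        sum_gw += g_til * w
        sum_gtil += g_til
    return preds

rewards = np.random.default_rng(0).random((1000, 10))
print(kt_lea(rewards)[-1])
```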
It is worth noting the elegance and extreme simplicity of Algorithm 1, and contrasting it with the algorithms in [26, 22–24]. Also, the regret bound is optimal [26, 23]. The parameter $\epsilon$ can be safely set to any constant, e.g. 1. Its role is equivalent to the initial guess used in doubling tricks [25].
7.2 Learning with Expert Advice
We will now construct an algorithm for LEA based on the $\delta$-shifted KT potential. We set $\delta$ to $T/2$, requiring the algorithm to know the number of rounds $T$ in advance; we will fix this later with the standard doubling trick.
To use the construction in Section 6, we need an OLO algorithm for the 1-d Hilbert space $\mathbb{R}$. Using the $\delta$-shifted KT potentials, the algorithm predicts, for any sequence $\{\tilde{g}_t\}_{t=1}^{\infty}$ of rewards,
$$w_t = \beta_t\,\mathrm{Wealth}_{t-1} = \frac{\sum_{j=1}^{t-1}\tilde{g}_j}{T/2 + t}\left(1 + \sum_{j=1}^{t-1}\tilde{g}_j w_j\right).$$
Then, following the construction in Section 6, we arrive at the final algorithm, Algorithm 2. We can derive a regret bound for Algorithm 2 by applying Theorem 4 to the $\delta$-shifted KT potential.
Corollary 6 (Regret Bound for Algorithm 2). Let $N \ge 2$ and $T \ge 0$ be integers. Let $\pi \in \Delta_N$ be a prior. Then Algorithm 2 with input $N, \pi, T$, for any reward vectors $g_1, g_2, \ldots, g_T \in [0,1]^N$, satisfies
$$\forall u \in \Delta_N: \quad \mathrm{Regret}_T(u) \le \sqrt{3T\big(3 + \mathrm{D}(u\,\|\,\pi)\big)}.$$
Hence, Algorithm 2 has both the best known guarantee on worst-case regret and the best known per-round time complexity; see Table 1. It also has the advantage of being very simple.
The proof of the corollary is in Appendix F. The only technical part of the proof is an upper bound on $f_t^{-1}(x)$, which we conveniently obtain by lower bounding $F_t(x)$.
The reason for using the shifted potential comes from the analysis of $f_t^{-1}(x)$. The unshifted algorithm would have an $O\big(\sqrt{T(\log T + \mathrm{D}(u\,\|\,\pi))}\big)$ regret bound; the shifting improves the bound to $O\big(\sqrt{T(1 + \mathrm{D}(u\,\|\,\pi))}\big)$. By changing $T/2$ in Algorithm 2 to another constant fraction of $T$, it is possible to trade off between the two constants 3 present in the square root of the regret upper bound.
The requirement of knowing the number of rounds $T$ in advance can be lifted by the standard doubling trick [25, Section 2.3.1], obtaining an anytime guarantee with a bigger leading constant:
$$\forall T \ge 0,\ \forall u \in \Delta_N: \quad \mathrm{Regret}_T(u) \le \frac{\sqrt{2}}{\sqrt{2}-1}\sqrt{3T\big(3 + \mathrm{D}(u\,\|\,\pi)\big)}.$$
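The doubling trick itself is a simple wrapper. The sketch below (ours, with a toy callback) restarts a horizon-dependent algorithm on epochs of length $1, 2, 4, \ldots$; summing the per-epoch $\sqrt{T_k}$ bounds is what produces the $\frac{\sqrt{2}}{\sqrt{2}-1}$ factor above.

```python
def doubling_wrapper(run_epoch, total_rounds):
    """Anytime wrapper: restart a horizon-dependent algorithm on epochs of length 2^k.

    Each epoch gets a fresh instance that is told its own horizon; the per-epoch regret
    bounds add up across the O(log T) epochs."""
    offset, k = 0, 0
    while offset < total_rounds:
        epoch_len = min(2 ** k, total_rounds - offset)
        run_epoch(epoch_len, offset)   # e.g., a fresh run of Algorithm 2 with T = epoch_len
        offset += epoch_len
        k += 1

# Toy demo: print the epoch schedule for 100 rounds.
doubling_wrapper(lambda T, off: print(f"epoch of length {T} starting at round {off}"), 100)
```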
[Figure 1: Total loss versus the learning rate parameter $U$ of OGD with $\eta_t = U\sqrt{1/t}$ (in log scale), on three regression datasets with the absolute loss (cpusmall, YearPredictionMSD, cadata), compared with the parameter-free algorithms DFEG [23], Adaptive Normal [22], PiSTOL [24], and the KT-based Algorithm 1.]
[Figure 2: Regrets to the best expert after $T = 32768$ rounds, versus the learning rate parameter $U$ of Hedge with $\eta_t = U\sqrt{1/t}$ (in log scale), on replicated Hadamard matrices with $N = 126$ and $k \in \{2, 8, 32\}$ good experts. The "good" experts are $\epsilon = 0.025$ better than the others. The competitor algorithms are NormalHedge [6], AdaNormalHedge [19], Squint [15], and the KT-based Algorithm 2. $\pi_i = 1/N$ for all algorithms.]
8 Discussion of the Results
We have presented a new interpretation of parameter-free algorithms as coin betting algorithms. This
interpretation, far from being just a mathematical gimmick, reveals the common hidden structure
of previous parameter-free algorithms for both OLO and LEA and also allows the design of new
algorithms. For example, we show that the characteristic of parameter-freeness is just a consequence
of having an algorithm that guarantees the maximum reward possible. The reductions in Sections 5
and 6 are also novel and they are in a certain sense optimal. In fact, the obtained Algorithms 1 and 2
achieve the optimal worst case upper bounds on the regret, see [26, 23] and [4] respectively.
We have also run an empirical evaluation to show that the difference between classic online learning algorithms and parameter-free ones is real and not merely theoretical. In Figure 1, we have used three regression datasets$^4$ and solved the OCO problem through OLO. In all three cases, we have used the absolute loss and normalized the input vectors to have L2 norm equal to 1. From the empirical results, it is clear that the optimal learning rate is completely data-dependent, yet parameter-free algorithms have performance very close to the unknown optimal tuning of the learning rate. Moreover, the KT-based Algorithm 1 seems to dominate all the other similar algorithms.
For LEA, we have used the synthetic setting in [6]. The dataset is composed of Hadamard matrices of size 64, where the row with constant values is removed, the rows are duplicated to 126 by inverting their signs, 0.025 is subtracted from $k$ rows, and the matrix is replicated in order to generate $T = 32768$ samples. For more details, see [6]. Here, the KT-based algorithm is the one in Algorithm 2, where the term $T/2$ is removed, so that the final regret bound has an additional $\ln T$ term. Again, we see that the parameter-free algorithms have performance close to or even better than Hedge with an oracle tuning of the learning rate, with no clear winner among the parameter-free algorithms.
Notice that since the adaptive Kelly strategy based on the KT estimator is very close to optimal, the only possible improvement is to have a data-dependent bound, for example like the ones in [24, 15, 19]. In future work, we will extend our definitions and reductions to the data-dependent case.
$^4$ Datasets available at https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/.
Acknowledgments. The authors thank Jacob Abernethy, Nicolò Cesa-Bianchi, Satyen Kale, Chansoo Lee, Giuseppe Molteni, and Manfred Warmuth for useful discussions on this work.
References
[1] E. Artin. The Gamma Function. Holt, Rinehart and Winston, Inc., 1964.
[2] N. Batir. Inequalities for the gamma function. Archiv der Mathematik, 91(6):554–563, 2008.
[3] H. H. Bauschke and P. L. Combettes. Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer Publishing Company, Incorporated, 1st edition, 2011.
[4] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
[5] N. Cesa-Bianchi, Y. Freund, D. Haussler, D. P. Helmbold, R. E. Schapire, and M. K. Warmuth. How to use expert advice. J. ACM, 44(3):427–485, 1997.
[6] K. Chaudhuri, Y. Freund, and D. Hsu. A parameter-free hedging algorithm. In Advances in Neural Information Processing Systems 22, pages 297–305, 2009.
[7] C.-P. Chen. Inequalities for the polygamma functions with application. General Mathematics, 13(3):65–72, 2005.
[8] A. Chernov and V. Vovk. Prediction with advice of unknown number of experts. In Proc. of the 26th Conf. on Uncertainty in Artificial Intelligence. AUAI Press, 2010.
[9] T. M. Cover and J. A. Thomas. Elements of Information Theory. John Wiley & Sons, 2nd edition, 2006.
[10] D. J. Foster. Personal communication, 2016.
[11] D. J. Foster, A. Rakhlin, and K. Sridharan. Adaptive online learning. In Advances in Neural Information Processing Systems 28, pages 3375–3383. Curran Associates, Inc., 2015.
[12] Y. Freund and R. E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. J. Computer and System Sciences, 55(1):119–139, 1997.
[13] A. Hoorfar and M. Hassani. Inequalities on the Lambert W function and hyperpower function. J. Inequal. Pure and Appl. Math, 9(2), 2008.
[14] J. L. Kelly. A new interpretation of information rate. Information Theory, IRE Trans. on, 2(3):185–189, September 1956.
[15] W. M. Koolen and T. van Erven. Second-order quantile methods for experts and combinatorial games. In Proc. of the 28th Conf. on Learning Theory, pages 1155–1175, 2015.
[16] R. E. Krichevsky and V. K. Trofimov. The performance of universal encoding. IEEE Trans. on Information Theory, 27(2):199–206, 1981.
[17] N. Littlestone and M. K. Warmuth. The weighted majority algorithm. Information and Computation, 108(2):212–261, 1994.
[18] H. Luo and R. E. Schapire. A drifting-games analysis for online learning and applications to boosting. In Advances in Neural Information Processing Systems 27, pages 1368–1376, 2014.
[19] H. Luo and R. E. Schapire. Achieving all with no parameters: AdaNormalHedge. In Proc. of the 28th Conf. on Learning Theory, pages 1286–1304, 2015.
[20] D. McAllester. A PAC-Bayesian tutorial with a dropout bound, 2013. arXiv:1307.2118.
[21] H. B. McMahan and J. Abernethy. Minimax optimal algorithms for unconstrained linear optimization. In Advances in Neural Information Processing Systems 26, pages 2724–2732, 2013.
[22] H. B. McMahan and F. Orabona. Unconstrained online linear learning in Hilbert spaces: Minimax algorithms and normal approximations. In Proc. of the 27th Conf. on Learning Theory, pages 1020–1039, 2014.
[23] F. Orabona. Dimension-free exponentiated gradient. In Advances in Neural Information Processing Systems 26 (NIPS 2013), pages 1806–1814. Curran Associates, Inc., 2013.
[24] F. Orabona. Simultaneous model selection and optimization through parameter-free stochastic learning. In Advances in Neural Information Processing Systems 27 (NIPS 2014), pages 1116–1124, 2014.
[25] S. Shalev-Shwartz. Online learning and online convex optimization. Foundations and Trends in Machine Learning, 4(2):107–194, 2011.
[26] M. Streeter and B. McMahan. No-regret algorithms for unconstrained online convex optimization. In Advances in Neural Information Processing Systems 25 (NIPS 2012), pages 2402–2410, 2012.
[27] V. Vovk. A game of prediction with expert advice. J. Computer and System Sciences, 56:153–173, 1998.
[28] E. T. Whittaker and G. N. Watson. A Course of Modern Analysis. Cambridge University Press, fourth edition, 1962. Reprinted.
[29] F. M. J. Willems, Y. M. Shtarkov, and T. J. Tjalkens. The context tree weighting method: Basic properties. IEEE Trans. on Information Theory, 41:653–664, 1995.
5,702 | 616 | A Method for Learning from Hints
Yaser S. Abu-Mostafa
Departments of Electrical Engineering, Computer Science,
and Computation and Neural Systems
California Institute of Technology
Pasadena, CA 91125
e-mail: [email protected]
Abstract
We address the problem of learning an unknown function by
putting together several pieces of information (hints) that we know
about the function. We introduce a method that generalizes learning from examples to learning from hints. A canonical representation of hints is defined and illustrated for new types of hints. All
the hints are represented to the learning process by examples, and
examples of the function are treated on equal footing with the rest
of the hints. During learning, examples from different hints are
selected for processing according to a given schedule. We present
two types of schedules; fixed schedules that specify the relative emphasis of each hint, and adaptive schedules that are based on how
well each hint has been learned so far. Our learning method is
compatible with any descent technique that we may choose to use.
1 INTRODUCTION
The use of hints is coming to the surface in a number of research communities dealing
with learning and adaptive systems. In the learning-from-examples paradigm, one
often has access not only to examples of the function, but also to a number of
hints (prior knowledge, or side information) about the function. The most common
difficulty in taking advantage of these hints is that they are heterogeneous and
cannot be easily integrated into the learning process. This paper is written with the
specific goal of addressing this problem. The paper develops a systematic method
for incorporating different hints in the usual learning-from-examples process.
Without such a systematic method, one can still take advantage of certain types of
hints. For instance, one can implement an invariance hint by preprocessing the input
to achieve the invariance through normalization. Alternatively, one can structure
the learning model in a way that directly implements the invariance (Minsky and
Papert, 1969). Whenever direct implementation is feasible, the full benefit of the
hint is realized. This paper does not attempt to offer a superior alternative to
direct implementation. However, when direct implementation is not an option, we
prescribe a systematic method for incorporating practically any hint in any descent
technique for learning. The goal is to automate the use of hints in learning to a
degree where we can effectively utilize a large number of different hints that may
be available in a practical situation. As the use of hints becomes routine, we are
encouraged to exploit even the simplest observations that we may have about the
function we are trying to learn.
The notion of hints is quite general and it is worthwhile to formalize what we mean
by a hint as far as our method is concerned. Let $f$ be the function that we are trying to learn. A hint is a property that $f$ is known to have. Thus, all that is needed to qualify as a hint is to have a litmus test that $f$ passes and that can be applied to different functions. Formally, a hint is a given subset of functions that includes $f$.
We start by introducing the basic nomenclature and notation. The environment $X$ is the set on which the function $f$ is defined. The points in the environment are distributed according to some probability distribution $P$. $f$ takes on values from some set $Y$:
$$f : X \to Y.$$
Often, $Y$ is just $\{0, 1\}$ or the interval $[0, 1]$. The learning process takes pieces of information about (the otherwise unknown) $f$ as input and produces a hypothesis $g$,
$$g : X \to Y,$$
that attempts to approximate $f$. The degree to which a hypothesis $g$ is considered an approximation of $f$ is measured by a distance or 'error'
$$E(g, f).$$
The error $E$ is based on the disagreement between $g$ and $f$ as seen through the eyes of the probability distribution $P$.
Two popular forms of the error measure are
$$E = \Pr[g(x) \ne f(x)]$$
and
$$E = \mathcal{E}\big[(g(x) - f(x))^2\big],$$
where $\Pr[\cdot]$ denotes the probability of an event, and $\mathcal{E}[\cdot]$ denotes the expected value of a random variable. The underlying probability distribution is $P$. $E$ will always be a non-negative quantity, and we will take $E(g, f) = 0$ to mean that $g$ and $f$ are identical for all intents and purposes. We will also assume that when the set of hypotheses is parameterized by real-valued parameters (e.g., the weights in the case of a neural network), $E$ will be well-behaved as a function of the parameters
A Method for Learning from Hints
(in order to allow for derivative-based descent techniques). We make the same
assumptions about the error measures that will be introduced in section 2 for the
hints.
In this paper, the 'pieces of information' about f that are input to the learning
process are more general than in the learning-from-examples paradigm. In that
paradigm, a number of points Xl, ... , X N are picked from X (usually independently
according to the probability distribution P) and the values of f on these points are
provided. Thus, the input to the learning process is the set of examples
(Xl, f(XI)),"', (XN' f(XN))
and these examples are used to guide the search for a good hypothesis. We will
consider the set of examples of f as only one of the available hints and denote it by
Ho. The other hints HI,' .. ,HM will be additional known facts about f, such as
invariance properties for instance.
The paper is organized as follows. Section 2 develops a canonical way for representing different hints. This is the first step in dealing with any hint that we encounter
in a practical situation. Section 3 develops the basis for learning from hints and
describes our method, including specific learning schedules.
2 REPRESENTATION OF HINTS
As we discussed before, a hint $H_m$ is defined by a litmus test that $f$ satisfies and that can be applied to the set of hypotheses. This definition of $H_m$ can be extended to a definition of 'approximation of $H_m$' in several ways. For instance, $g$ can be considered to approximate $H_m$ within $\epsilon$ if there is a function $h$ that strictly satisfies $H_m$ for which $E(g, h) \le \epsilon$. In the context of learning, it is essential to have a notion of approximation since exact learning is seldom achievable. Our definitions for approximating different hints will be part of the scheme for representing those hints.
The first step in representing $H_m$ is to choose a way of generating 'examples' of the hint. For illustration, suppose that $H_m$ asserts that
$$f : [-1, +1] \to [-1, +1]$$
is an odd function. An example of $H_m$ would have the form
$$f(-x) = -f(x)$$
for a particular $x \in [-1, +1]$. To generate $N$ examples of this hint, we generate $x_1, \ldots, x_N$ and assert for each $x_n$ that $f(-x_n) = -f(x_n)$. Suppose that we are in the middle of a learning process, and that the current hypothesis is $g$ when the example $f(-x) = -f(x)$ is presented. We wish to measure how much $g$ disagrees with this example. This leads to the second component of the representation, the error measure $e_m$. For the oddness hint, $e_m$ can be defined as
$$e_m = (g(x) + g(-x))^2,$$
so that $e_m = 0$ reflects total agreement with the example (i.e., $g(-x) = -g(x)$). Once the disagreement between $g$ and an example of $H_m$ has been quantified
through $e_m$, the disagreement between $g$ and $H_m$ as a whole is automatically quantified through $E_m$, where
$$E_m = \mathcal{E}(e_m).$$
The expected value is taken w.r.t. the probability rule for picking the examples. Therefore, $E_m$ can be estimated by averaging $e_m$ over a number of examples that are independently picked.
The choice of representation of $H_m$ is not unique, and $E_m$ will depend on the form of examples, the probability rule for picking the examples, and the error measure $e_m$. A minimum requirement on $E_m$ is that it should be zero when $E = 0$. This requirement guarantees that a hypothesis for which $E = 0$ (perfect hypothesis) will not be excluded by the condition $E_m = 0$.
Let us illustrate how to represent different types of hints. Perhaps the most common type of hint is the invariance hint. This hint asserts that $f(x) = f(x')$ for certain pairs $x, x'$. For instance, "$f$ is shift-invariant" is formalized by the pairs $x, x'$ that are shifted versions of each other. To represent the invariance hint, an invariant pair $(x, x')$ is picked as an example. The error associated with this example is
$$e_m = (g(x) - g(x'))^2.$$
Another related type of hint is the monotonicity hint (or inequality hint). The hint asserts for certain pairs $x, x'$ that $f(x) \le f(x')$. For instance, "$f$ is monotonically nondecreasing in $x$" is formalized by all pairs $x, x'$ such that $x < x'$. To represent the monotonicity hint, an example $(x, x')$ is picked, and the error associated with this example is given by
$$e_m = \begin{cases} (g(x) - g(x'))^2 & \text{if } g(x) > g(x'),\\ 0 & \text{if } g(x) \le g(x'). \end{cases}$$
The third type of hint we discuss here is the approximation hint. The hint asserts for certain points $x \in X$ that $f(x) \in [a_x, b_x]$. In other words, the value of $f$ at $x$ is known only approximately. The error associated with an example $x$ of the approximation hint is
$$e_m = \begin{cases} (g(x) - a_x)^2 & \text{if } g(x) < a_x,\\ (g(x) - b_x)^2 & \text{if } g(x) > b_x,\\ 0 & \text{if } g(x) \in [a_x, b_x]. \end{cases}$$
Another type of hint arises when the learning model allows non-binary values for $g$ where $f$ itself is known to be binary. This gives rise to the binary hint. Let $\tilde{X} \subseteq X$ be the set where $f$ is known to be binary (for Boolean functions, $\tilde{X}$ is the set of binary input vectors). The binary hint is represented by examples of the form $x$, where $x \in \tilde{X}$. The error function associated with an example $x$ (assuming the 0/1 binary convention, and assuming $g(x) \in [0, 1]$) is
$$e_m = g(x)(1 - g(x)).$$
This choice of $e_m$ forces it to be zero when $g(x)$ is either 0 or 1, while it would be positive if $g(x)$ is between 0 and 1.
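To make the catalog above concrete, the following Python sketch (our own; all function names are hypothetical) implements the per-example error measures $e_m$ of the four hints for a hypothesis $g$ given as a plain function.

```python
def invariance_error(g, x, x_prime):
    # Invariance hint f(x) = f(x'): e_m = (g(x) - g(x'))^2
    return (g(x) - g(x_prime)) ** 2

def monotonicity_error(g, x, x_prime):
    # Monotonicity hint for a pair with f(x) <= f(x'): penalize only violations
    d = g(x) - g(x_prime)
    return d ** 2 if d > 0 else 0.0

def approximation_error(g, x, a_x, b_x):
    # Approximation hint f(x) in [a_x, b_x]: zero error inside the interval
    v = g(x)
    if v < a_x:
        return (v - a_x) ** 2
    if v > b_x:
        return (v - b_x) ** 2
    return 0.0

def binary_error(g, x):
    # Binary hint (0/1 convention, g(x) in [0, 1]): zero iff g(x) is exactly 0 or 1
    return g(x) * (1.0 - g(x))

g = lambda x: max(-1.0, min(1.0, x ** 3 + 0.1))   # some candidate hypothesis on [-1, +1]
print((g(0.5) + g(-0.5)) ** 2)                    # oddness-hint (invariance-style) error at x = 0.5
print(monotonicity_error(g, 0.2, 0.7))            # 0.0: g respects this ordered pair
```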
It is worth noting that the set of examples of $f$ can be formally treated as a hint, too. Given $(x_1, f(x_1)), \ldots, (x_N, f(x_N))$, the examples hint asserts that these are the correct values of $f$ at those particular points. Now, to generate an 'example' of this hint, we pick a number $n$ from 1 to $N$ and use the corresponding $(x_n, f(x_n))$. The error associated with this example is $e_0$ (we fix the convention that $m = 0$ for the examples hint):
$$e_0 = (g(x_n) - f(x_n))^2.$$
Assuming that the probability rule for picking $n$ is uniform over $\{1, \ldots, N\}$,
$$E_0 = \mathcal{E}(e_0) = \frac{1}{N}\sum_{n=1}^{N}(g(x_n) - f(x_n))^2.$$
In this case, $E_0$ is also the best estimator of $E = \mathcal{E}[(g(x) - f(x))^2]$ given $x_1, \ldots, x_N$ that are independently picked according to the original probability distribution $P$.
This way of looking at the examples of $f$ justifies their treatment exactly as one of the hints, and underlines the distinction between $E$ and $E_0$.
In a practical situation, we try to infer as many hints about $f$ as the situation will allow. Next, we represent each hint according to the scheme discussed in this section. This leads to a list $H_0, H_1, \ldots, H_M$ of hints that are ready to produce examples upon the request of the learning algorithm. We now address how the algorithm should pick and choose between these examples as it moves along.
3 LEARNING SCHEDULES
If the learning algorithm had complete information about $f$, it would search for a hypothesis $g$ for which $E(g, f) = 0$. However, $f$ being unknown means that the point $E = 0$ cannot be directly identified. The most any learning algorithm can do given the hints $H_0, H_1, \ldots, H_M$ is to reach a hypothesis $g$ for which all the error measures $E_0, E_1, \ldots, E_M$ are zeros. Indeed, we have required that $E = 0$ implies that $E_m = 0$ for all $m$.
If that point is reached, regardless of how it is reached, the job is done. However, it is seldom the case that we can reach the zero-error point, because either (1) it does not exist (i.e., no hypothesis can satisfy all the hints simultaneously, which implies that no hypothesis can replicate $f$ exactly), or (2) it is difficult to reach (i.e., the computing resources do not allow us to exhaustively search the space of hypotheses looking for that point). In either case, we will have to settle for a point where the $E_m$'s are 'as small as possible'.
How small should each $E_m$ be? A balance has to be struck, otherwise some $E_m$'s may become very small at the expense of the others. This situation would mean that some hints are over-learned while the others are under-learned. We will discuss learning schedules that use different criteria for balancing between the hints. The schedules are used by the learning algorithm to simultaneously minimize the $E_m$'s. Let us start by exploring how simultaneous minimization of a number of quantities is done in general.
Perhaps the most common approach is that of penalty functions (Wismer and Chat-
tergy, 1978). In order to minimize $E_0, E_1, \ldots, E_M$, we minimize the penalty function
$$\sum_{m=0}^{M}\alpha_m E_m,$$
where each $\alpha_m$ is a non-negative number that may be constant (exact penalty function) or variable (sequential penalty function). Any descent technique can be employed to minimize the penalty function once the $\alpha_m$'s are selected. The $\alpha_m$'s are weights that reflect the relative emphasis or 'importance' of the corresponding $E_m$'s. The choice of the weights is usually crucial to the quality of the solution.
Even if the $\alpha_m$'s are determined, we still do not have the explicit values of the $E_m$'s in our case (recall that $E_m$ is the expected value of the error $e_m$ on an example of the hint). Instead, we will estimate $E_m$ by drawing several examples and averaging their error. Suppose that we draw $N_m$ examples of $H_m$. The estimate for $E_m$ would then be
$$\frac{1}{N_m}\sum_{n=1}^{N_m} e_m^{(n)},$$
where $e_m^{(n)}$ is the error on the $n$-th example. Consider a batch of examples consisting of $N_0$ examples of $H_0$, $N_1$ examples of $H_1$, ..., and $N_M$ examples of $H_M$. The total error of this batch is
$$\sum_{m=0}^{M}\sum_{n=1}^{N_m} e_m^{(n)}.$$
If we take $N_m \propto \alpha_m$, this total error will be a proportional estimate of the penalty function
$$\sum_{m=0}^{M}\alpha_m E_m.$$
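Concretely, rounding $N_m \propto \alpha_m$ to integral batch counts might look like the following sketch (ours; largest-remainder rounding is one arbitrary choice):

```python
def batch_sizes(alphas, batch_total):
    """Allocate batch_total examples across hints with N_m proportional to alpha_m."""
    total = sum(alphas)
    raw = [batch_total * a / total for a in alphas]
    sizes = [int(r) for r in raw]
    # Distribute the leftover examples to the hints with the largest fractional parts.
    for m in sorted(range(len(raw)), key=lambda m: raw[m] - sizes[m], reverse=True):
        if sum(sizes) == batch_total:
            break
        sizes[m] += 1
    return sizes

print(batch_sizes([1.0, 2.0, 0.5], batch_total=20))   # -> [6, 11, 3]
```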
In effect, we translated the weights into a schedule, where different hints are emphasized not by magnifying their error, but by representing them with more examples. A batch of examples can be either a uniform batch that consists of $N$ examples of one hint at a time, or, more generally, a mixed batch where examples of different hints are allowed within the same batch. If the descent technique is linear and the learning rate is small, a schedule that uses mixed batches is equivalent to a schedule that alternates between uniform batches (with frequency equal to the frequency of examples in the mixed batch). If we are using a nonlinear descent technique, it is generally more difficult to ascertain a direct translation from mixed batches to uniform batches, but there may be compelling heuristic correspondences. All schedules discussed here are expressed in terms of uniform batches for simplicity.
The implementation of a given schedule goes as follows: (1) The algorithm decides which hint (which $m$ for $m = 0, 1, \ldots, M$) to work on next, according to some criterion; (2) The algorithm then requests a batch of examples of this hint; (3) It performs its descent on this batch; and (4) When it is done, it goes back to step (1). We make a distinction between fixed schedules, where the criterion for selecting the hint can be 'evaluated' ahead of time (albeit time-invariant or time-varying,
deterministic or stochastic), and adaptive schedules, where the criterion depends on
what happens as the algorithm runs. Here are some fixed and adaptive schedules:
Simple Rotation: This is the simplest possible schedule that tries to balance between the hints. It is a fixed schedule that rotates between $H_0, H_1, \ldots, H_M$. Thus, at step $k$, a batch of $N$ examples of $H_m$ is processed, where $m = k \bmod (M + 1)$. This simple-minded algorithm tends to do well in situations where the $E_m$'s are somewhat similar.
Weighted Rotation: This is the next step in fixed schedules that tries to give different emphasis to different $E_m$'s. The schedule rotates between the hints, visiting $H_m$ with frequency $\nu_m$. The choice of the $\nu_m$'s can achieve balance by emphasizing the hints that are more important or harder to learn.
Maximum Error: This is the simplest adaptive schedule that tries to achieve the same type of balance as simple rotation. At each step $k$, the algorithm processes the hint with the largest error $E_m$. The algorithm uses estimates of the $E_m$'s to make its selection.
Maximum Weighted Error: This is the adaptive counterpart to weighted rotation. It selects the hint with the largest value of $\nu_m E_m$. The choice of the $\nu_m$'s can achieve balance by making up for disparities between the numerical ranges of the $E_m$'s. Again, the algorithm uses estimates of the $E_m$'s.
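The schedules above are easy to state as selection rules over estimated errors. The sketch below (ours) implements the two fixed schedules as generators and the two adaptive ones as argmax rules; adaptive minimization, described next, is omitted here since it additionally requires a model of $E$.

```python
import itertools

def simple_rotation(M):
    return itertools.cycle(range(M + 1))            # m = k mod (M + 1)

def weighted_rotation(freqs):
    # Visit hint m with (integer) frequency freqs[m], via a repeating deterministic pattern.
    pattern = [m for m, v in enumerate(freqs) for _ in range(v)]
    return itertools.cycle(pattern)

def maximum_error(E_hat):
    return max(range(len(E_hat)), key=lambda m: E_hat[m])

def maximum_weighted_error(E_hat, v):
    return max(range(len(E_hat)), key=lambda m: v[m] * E_hat[m])

# One step of an adaptive schedule: pick the most under-learned hint, then descend on it.
E_hat = [0.30, 0.05, 0.12]        # current estimates of E_0, E_1, E_2
m = maximum_error(E_hat)          # -> 0: request a batch of examples of H_0 next
print(m)
```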
Adaptive schedules attempt to answer the question: given a set of values for the $E_m$'s, which hint is the most under-learned? The above schedules answer the question by comparing the individual $E_m$'s. Although this works well in simple cases, it does not take into consideration the correlation between different hints. As we deal with more and more hints, the correlation between the $E_m$'s becomes more significant. This leads us to the final schedule, which achieves the balance between the $E_m$'s through their relation to the actual error $E$.
Adaptive Minimization: Given the estimates of $E_0, E_1, \ldots, E_M$, make $M + 1$ estimates of $E$, each based on all but one of the hints:
$$\hat{E}(\cdot, E_1, E_2, \ldots, E_M),\quad \hat{E}(E_0, \cdot, E_2, \ldots, E_M),\quad \hat{E}(E_0, E_1, \cdot, \ldots, E_M),\quad \ldots,\quad \hat{E}(E_0, E_1, E_2, \ldots, \cdot),$$
and choose the hint for which the corresponding estimate is the smallest.
In other words, E becomes the common thread between the Em's. Knowing that
we are really trying to minimize E, and that the Em's are merely a vehicle to this
end, the criterion for balancing the Em's should be based on what is happening to
E as far as we can tell.
CONCLUSION
This paper developed a systematic method for using different hints as input to
the learning process, generalizing the case of invariance hints (Abu-Mostafa, 1990).
The method treats all hints on equal footing, including the examples of the function. Hints are represented in a canonical way that is compatible with the common
learning-from-examples paradigm. No restrictions are made on the learning model
or the descent technique to be used.
The hints are captured by the error measures $E_0, E_1, \ldots, E_M$, and the learning algorithm attempts to simultaneously minimize these quantities. The simultaneous minimization of the $E_m$'s gives rise to the idea of balancing between the different hints. A number of algorithms that minimize the $E_m$'s while maintaining this balance were discussed in the paper. Adaptive schedules in particular are worth noting because they automatically compensate against many artifacts of the learning process.
It is worthwhile to distinguish between the quality of the hints and the quality of the learning algorithm that uses these hints. The quality of the hints is determined by how reliably one can predict that the actual error $E$ will be close to zero for a given hypothesis, based on the fact that $E_0, E_1, \ldots, E_M$ are close to zero for that hypothesis. The quality of the algorithm is determined by how likely it is that the $E_m$'s will become nearly as small as they can be within a reasonable time.
Acknowledgements
The author would like to thank Ms. Zehra Kok for her valuable input. This work
was supported by the AFOSR under grant number F49620-92-J-0398.
References
Abu-Mostafa, Y. S. (1990), Learning from hints in neural networks, Journal of Complexity 6, 192–198.
Al-Mashouq, K. and Reed, I. (1991), Including hints in training neural networks, Neural Computation 3, 418–427.
Minsky, M. L. and Papert, S. A. (1969), "Perceptrons," MIT Press.
Omlin, C. and Giles, C. L. (1992), Training second-order recurrent neural networks using hints, Machine Learning: Proceedings of the Ninth International Conference (ML-92), D. Sleeman and P. Edwards (eds.), Morgan Kaufmann.
Suddarth, S. and Holden, A. (1991), Symbolic neural systems and the use of hints for developing complex systems, International Journal of Machine Studies 35, p. 291.
Wismer, D. A. and Chattergy, R. (1978), "Introduction to Nonlinear Optimization," North Holland.
5,703 | 6,160 | Temporal Regularized Matrix Factorization for
High-dimensional Time Series Prediction
Hsiang-Fu Yu
University of Texas at Austin
[email protected]
Nikhil Rao
Technicolor Research
[email protected]
Inderjit S. Dhillon
University of Texas at Austin
[email protected]
Abstract
Time series prediction problems are becoming increasingly high-dimensional in
modern applications, such as climatology and demand forecasting. For example,
in the latter problem, the number of items for which demand needs to be forecast
might be as large as 50,000. In addition, the data is generally noisy and full of
missing values. Thus, modern applications require methods that are highly scalable,
and can deal with noisy data in terms of corruptions or missing values. However,
classical time series methods usually fall short of handling these issues. In this
paper, we present a temporal regularized matrix factorization (TRMF) framework
which supports data-driven temporal learning and forecasting. We develop novel
regularization schemes and use scalable matrix factorization methods that are
eminently suited for high-dimensional time series data that has many missing values.
Our proposed TRMF is highly general, and subsumes many existing approaches
for time series analysis. We make interesting connections to graph regularization
methods in the context of learning the dependencies in an autoregressive framework.
Experimental results show the superiority of TRMF in terms of scalability and
prediction quality. In particular, TRMF is two orders of magnitude faster than
other methods on a problem of dimension 50,000, and generates better forecasts on
real-world datasets such as Wal-mart E-commerce datasets.
1 Introduction
Time series analysis is a central problem in many applications such as demand forecasting and
climatology. Often, such applications require methods that are highly scalable to handle a very large
number (n) of possibly inter-dependent one-dimensional time series and/or have a large time frame
(T ). For example, climatology applications involve data collected from possibly thousands of sensors,
every hour (or less) over several years. Similarly, a store tracking its inventory would track thousands
of items every day for multiple years. Not only is the scale of such problems huge, but they might
also involve missing values, due to sensor malfunctions, occlusions or simple human errors. Thus,
modern time series applications present two challenges to practitioners: scalability to handle large n
and T and the flexibility to handle missing values.
Most approaches in the traditional time series literature such as autoregressive (AR) models or
dynamic linear models (DLM)[7, 21] focus on low-dimensional time-series data and fall short of
handling the two aforementioned issues. For example, an AR model of order $L$ requires $O(TL^2n^4 + L^3n^6)$ time to estimate $O(Ln^2)$ parameters, which is prohibitive even for moderate values of $n$. Similarly, Kalman filter based DLM approaches need $O(kn^2T + k^3T)$ computation cost to update
parameters, where k is the latent dimensionality, which is usually chosen to be larger than n in many
situations [13]. As a specific example, the maximum likelihood estimator implementation in the
widely used R-DLM package [12], which relies on a general optimization solver, cannot scale beyond
$n$ in the tens (see Appendix D for details). On the other hand, for models such as AR, handling missing values can be very challenging even for one-dimensional time series [1], let alone for high-dimensional ones.
A natural way to model high-dimensional time series data is in the form of a matrix, with rows corresponding to each one-dimensional time series and columns corresponding to time points. In light of the observation that the $n$ time series are usually highly correlated with each other, there have been some attempts to apply low-rank matrix factorization (MF) or matrix completion (MC) techniques to analyze high-dimensional time series [2, 14, 16, 23, 26]. Unlike the AR and DLM models above, state-of-the-art MF methods scale linearly in $n$, and hence can handle large datasets. Let $Y \in \mathbb{R}^{n\times T}$ be the matrix for the observed $n$-dimensional time series, with $Y_{it}$ being the observation at the $t$-th time point of the $i$-th time series. Under the standard MF approach, $Y_{it}$ is estimated by the inner product $f_i^{\top}x_t$, where $f_i \in \mathbb{R}^k$ is a $k$-dimensional latent embedding for the $i$-th time series, and $x_t \in \mathbb{R}^k$ is a $k$-dimensional latent temporal embedding for the $t$-th time point. We can stack the $x_t$'s into the columns of a matrix $X \in \mathbb{R}^{k\times T}$ and the $f_i^{\top}$'s into the rows of $F \in \mathbb{R}^{n\times k}$ (Figure 1) to get $Y \approx FX$. We can then solve
$$\min_{F, X}\ \sum_{(i,t)\in\Omega}\big(Y_{it} - f_i^{\top}x_t\big)^2 + \lambda_f R_f(F) + \lambda_x R_x(X), \tag{1}$$
where $\Omega$ is the set of the observed entries. $R_f(F)$ and $R_x(X)$ are regularizers for $F$ and $X$, which usually play a role to avoid overfitting and/or to encourage some specific temporal structures among the embeddings.
[Figure 1: Matrix factorization model for multiple time series. $F$ captures features for each time series in the matrix $Y$, and $X$ captures the latent and time-varying variables.]
It is clear that the common choice of the regularizer $R_x(X) = \|X\|_F^2$ is no longer appropriate for time series applications, as it does not take into account the ordering among the temporal embeddings $\{x_t\}$. Most existing MF approaches [2, 14, 16, 23, 26] adapt graph-based approaches to handle temporal dependencies. Specifically, the dependencies are described by a weighted similarity graph and incorporated through
a Laplacian regularizer [18]. However, graph-based regularization fails in cases where there are
negative correlations between two time points. Furthermore, unlike scenarios where explicit graph
information is available with the data (such as a social network or product co-purchasing graph
for recommender systems), explicit temporal dependency structure is usually unavailable and has
to be inferred or approximated, which causes practitioners to either perform a separate procedure
to estimate the dependencies or consider very short-term dependencies with simple fixed weights.
Moreover, existing MF approaches, while yielding good estimations for missing values in past points,
are poor in terms of forecasting future values, which is the problem of interest in time series analysis.
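For concreteness, objective (1) with plain squared-Frobenius regularizers and a missing-value mask can be evaluated as in the sketch below (ours; in TRMF the temporal regularizer introduced in Section 3 replaces the term on $X$).

```python
import numpy as np

def mf_objective(Y, mask, F, X, lam_f, lam_x):
    """Objective (1) with squared-Frobenius regularizers; mask is 1 on observed entries."""
    residual = mask * (Y - F @ X)                 # errors only on observed (i, t) in Omega
    return (np.sum(residual ** 2)
            + lam_f * np.sum(F ** 2)
            + lam_x * np.sum(X ** 2))

rng = np.random.default_rng(0)
n, T, k = 50, 200, 4
F, X = rng.standard_normal((n, k)), rng.standard_normal((k, T))
Y = F @ X + 0.1 * rng.standard_normal((n, T))
mask = (rng.random((n, T)) < 0.7).astype(float)   # 30% missing values
print(mf_objective(Y, mask, F, X, lam_f=0.1, lam_x=0.1))
```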
In this paper, we propose a novel temporal regularized matrix factorization framework (TRMF) for
high-dimensional time series analysis. In TRMF, we consider a principled approach to describe the
structure of temporal dependencies among latent temporal embeddings {xt } and design a temporal
regularizer to incorporate this temporal structure into the standard MF formulation. Unlike most
existing MF approaches, our TRMF method supports data-driven temporal dependency learning
and also brings the ability to forecast future values to a matrix factorization approach. In addition,
inherited from the property of MF approaches, TRMF can easily handle high-dimensional time series
data even in the presence of many missing values. As a specific example, we demonstrate a novel
autoregressive temporal regularizer which encourages AR structure among temporal embeddings
{xt }. We also make connections between the proposed regularization framework and graph-based
approaches [18], where even negative correlations can be accounted for. This connection not only
leads to a better understanding of the dependency structure incorporated by our framework, but also
brings the benefit of using off-the-shelf efficient solvers such as GRALS [15] directly to solve TRMF.
Paper Organization. In Section 2, we review the existing approaches and their limitations on data
with temporal dependencies. We present the proposed TRMF framework in Section 3, and show that
the method is highly general and can be used for a variety of time series applications. We introduce
a novel AR temporal regularizer in Section 4, and make connections to graph-based regularization
approaches. We demonstrate the superiority of the proposed approach via extensive experimental
results in Section 5 and conclude the paper in Section 6.
2 Motivations: Existing Approaches and Limitations
2.1 Classical Time-Series Models
Models such as AR and DLM are not suitable for modern multiple high-dimensional time series data
(i.e., both n and T are large) due to their inherent computational inefficiency (see Section 1). To avoid
overfitting in AR models, there have been studies with various structured transition matrices such
as low rank and sparse matrices [5, 10, 11]. The focus of this research has been on obtaining better
statistical guarantees. The scalability issue of AR models remains open. On the other hand, it is also
challenging for many classic time-series models to deal with data that has many missing values [1].
In many situations where the model parameters are either given or designed by practitioners, the
Kalman filter approach is used to perform forecasting, while the Kalman smoothing approach is
used to impute missing entries. When model parameters are unknown, EM algorithms are applied to
estimate both the model parameters and latent embeddings for DLM [3, 8, 9, 17, 19]. As most EM
approaches for DLM contain the Kalman filter as a building block, they cannot scale to very high
dimensional time series data. Indeed, as shown in Section 5, the popular R package for DLMs does not scale beyond data with tens of dimensions.
2.2 Existing Matrix Factorization Approaches for Data with Temporal Dependencies
In standard MF (1), the squared Frobenius norm $R_x(X) = \|X\|_F^2 = \sum_{t=1}^{T}\|x_t\|^2$ is usually the regularizer of choice for $X$. Because the squared Frobenius norm assumes no dependencies among $\{x_t\}$, the standard MF formulation is invariant to column permutation and not applicable to data with temporal dependencies. Hence most existing temporal MF approaches turn to the framework of graph-based
regularization [18] for temporally dependent {xt }, with a graph encoding the temporal dependencies.
An exception is the work in [22], where the authors use specially designed regularizers to encourage
a log-normal structure on the temporal coefficients.
Graph regularization for temporal dependencies: The framework of graph-based regularization is
an approach to describe and incorporate general dependencies among variables. Let G be a graph
over {xt } and Gts be the edge weight between the t-th node and s-th node. A popular regularizer to
include as part of an objective function is the following:
$$R_x(X) = \mathcal{G}(X \mid G, \eta) := \frac{1}{2}\sum_{t \sim s} G_{ts}\, \|x_t - x_s\|^2 + \frac{\eta}{2}\sum_t \|x_t\|^2, \quad (2)$$
where $t \sim s$ denotes an edge between the t-th and s-th nodes, and the second summation term is
used to guarantee strong convexity. A large $G_{ts}$ will ensure that $x_t$ and $x_s$ are close to each
other in Euclidean distance when (2) is minimized. Note that to guarantee the convexity of
$\mathcal{G}(X \mid G, \eta)$, we need $G_{ts} \ge 0$.

[Figure 2: Graph-based regularization for temporal dependencies.]
To apply graph-based regularizers to temporal dependencies, we need to specify the (repeating)
dependency pattern by a lag set L and a weight vector w such that all the edges $t \sim s$ of distance
l (i.e., $|s - t| = l$) share the same weight $G_{ts} = w_l$. See Figure 2 for an example with L = {1, 4}.
Given L and w, the corresponding graph regularizer becomes
$$\mathcal{G}(X \mid G, \eta) = \frac{1}{2}\sum_{l\in L}\sum_{t:\, t>l} w_l\, \|x_t - x_{t-l}\|^2 + \frac{\eta}{2}\sum_t \|x_t\|^2. \quad (3)$$
This direct use of the graph-based approach, while intuitive, has two issues: (a) there might be negatively
correlated dependencies between two time points; (b) unlike many applications where such regularizers
are used, the explicit temporal dependency structure is usually not available and has to be inferred.
As a result, most existing approaches consider only very simple temporal dependencies, such as a
small lag set L (e.g., L = {1}) and/or uniform weights (e.g., $w_l = 1, \forall l \in L$). For example, a
simple chain graph is considered to design the smoothing regularizer in TCF [23]. This leads to poor
forecasting abilities of existing MF methods for large-scale time series applications.
2.3 Challenges to Learn Temporal Dependencies
One could try to learn the weights $w_l$ automatically by using the same regularizer as in (3) but with
the weights unknown. This would lead to the following optimization problem:
$$\min_{F, X, w \ge 0} \sum_{(i,t)\in\Omega} (Y_{it} - f_i^\top x_t)^2 + \lambda_f R_f(F) + \frac{\lambda_x}{2} \sum_{l\in L} \sum_{t:\, t-l>0} w_l \|x_t - x_{t-l}\|^2 + \frac{\lambda_x \eta}{2} \sum_t \|x_t\|^2, \quad (4)$$
where $\mathbf{0}$ is the zero vector, and $w \ge \mathbf{0}$ is the constraint imposed by graph regularization.
It is not hard to see that the above optimization yields the trivial all-zero solution for $w^*$, meaning
the objective function is minimized when no temporal dependencies exist! To avoid the all-zero
solution, one might want to impose a simplex constraint on w (i.e., $\sum_{l\in L} w_l = 1$). Again, it
is not hard to see that this will result in $w^*$ being a 1-sparse vector, with $w_{l^*}$ being 1, where
$l^* = \arg\min_{l\in L} \sum_{t:\, t>l} \|x_t - x_{t-l}\|^2$. Thus, learning the weights automatically by simply
plugging the regularizer into the MF formulation is not a viable option.
3 Temporal Regularized Matrix Factorization
In order to resolve the limitations mentioned in Sections 2.2 and 2.3, we propose the Temporal
Regularized Matrix Factorization (TRMF) framework, which is a novel approach to incorporate
temporal dependencies into matrix factorization models. Unlike the aforementioned graph-based
approaches, we propose to use well-studied time series models to describe temporal dependencies
among {xt } explicitly. Such models take the form:
$$x_t = \mathcal{M}_\Theta\big(\{x_{t-l} : l \in L\}\big) + \epsilon_t, \quad (5)$$
where $\epsilon_t$ is a Gaussian noise vector, and $\mathcal{M}_\Theta$ is the time-series model parameterized by L and $\Theta$. L
is a set containing the lag indices l, each denoting a dependency between the t-th and (t-l)-th time points,
while $\Theta$ captures the weighting information of the temporal dependencies (such as the transition matrix
in AR models). To incorporate the temporal dependency into the standard MF formulation (1), we
propose to design a new regularizer $T_{\mathcal{M}}(X \mid \Theta)$ which encourages the structure induced by $\mathcal{M}_\Theta$.
Taking a standard approach to modeling time series, we set $T_{\mathcal{M}}(X \mid \Theta)$ to be the negative log likelihood of
observing a particular realization of $\{x_t\}$ for a given model $\mathcal{M}_\Theta$:
$$T_{\mathcal{M}}(X \mid \Theta) = -\log \mathbb{P}(x_1, \ldots, x_T \mid \Theta). \quad (6)$$
When $\Theta$ is given, we can use $R_x(X) = T_{\mathcal{M}}(X \mid \Theta)$ in the MF formulation (1) to encourage $\{x_t\}$ to
follow the temporal dependency induced by $\mathcal{M}_\Theta$. When $\Theta$ is unknown, we can treat $\Theta$ as another
set of variables and include another regularizer $R_\Theta(\Theta)$ in (1):
$$\min_{F, X, \Theta} \sum_{(i,t)\in\Omega} (Y_{it} - f_i^\top x_t)^2 + \lambda_f R_f(F) + \lambda_x T_{\mathcal{M}}(X \mid \Theta) + \lambda_\Theta R_\Theta(\Theta), \quad (7)$$
which can be solved by an alternating minimization procedure over F, X, and $\Theta$.
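A skeleton of this alternating scheme, as a minimal NumPy sketch (our own illustration with placeholder solver callbacks, not the paper's implementation; concrete subproblem solvers for the AR instantiation are discussed in Section 4):

import numpy as np

def trmf_alternating(Y, mask, k, n_iters, solve_F, solve_X, solve_Theta):
    """Alternating minimization for (7). Each callback minimizes (7)
    over one block of variables with the other blocks held fixed.

    Y    : n x T observation matrix (entries outside `mask` are ignored).
    mask : boolean n x T array marking the observed set Omega.
    """
    n, T = Y.shape
    rng = np.random.default_rng(0)
    F = 0.1 * rng.standard_normal((n, k))
    X = 0.1 * rng.standard_normal((k, T))
    Theta = solve_Theta(X)               # MAP estimate, problem (8)
    for _ in range(n_iters):
        F = solve_F(Y, mask, X)          # e.g. least squares per row of Y
        X = solve_X(Y, mask, F, Theta)   # e.g. GRALS [15] for the AR case
        Theta = solve_Theta(X)           # re-estimate the dependency model
    return F, X, Theta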
Data-driven Temporal Dependency Learning in TRMF: Recall that in Section 2.3, we showed
that directly using graph-based regularizers to incorporate temporal dependencies leads to trivial
solutions for the weights. TRMF circumvents this issue. When F and X are fixed, (7) is reduced to:
$$\min_{\Theta}\; \lambda_x T_{\mathcal{M}}(X \mid \Theta) + \lambda_\Theta R_\Theta(\Theta), \quad (8)$$
which is a maximum a posteriori (MAP) estimation problem (in the Bayesian sense) for the
best $\Theta$ for a given $\{x_t\}$ under the $\mathcal{M}_\Theta$ model. There are well-developed algorithms to solve (8) and
obtain a non-trivial $\Theta$. Thus, unlike most existing temporal matrix factorization approaches where the
strength of dependencies is fixed, $\Theta$ in TRMF can be learned automatically from data.
Time Series Analysis with TRMF: TRMF (7) lends itself seamlessly to a variety of commonly
encountered tasks in analyzing data with temporal dependencies:
• Time-series Forecasting: Once we have $\mathcal{M}_\Theta$ for the latent embeddings $\{x_t : t = 1, \ldots, T\}$, we can
use it to predict future latent embeddings $\{x_t : t > T\}$ and thus obtain non-trivial
forecasts $y_t = F x_t$ for $t > T$ (see the code sketch after this list).
• Missing-value Imputation: In some time-series applications, some entries in Y might be unobserved,
for example due to faulty sensors in electricity usage monitoring or occlusions in the
case of motion recognition in video. We can use $f_i^\top x_t$ to impute these missing entries, much like
standard matrix completion, which is useful in recommender systems [23] and sensor networks [26].
Extensions to Incorporate Extra Information: Like matrix factorization, TRMF (7) can be extended
to incorporate additional information. For example, pairwise relationships between the time
series can be incorporated using structural regularizers on F. Furthermore, when features are known
for the time series, we can make use of interaction models such as those in [6, 24, 25]. Also, TRMF
can be extended to tensors. More details on these extensions can be found in Appendix B.
4 A Novel Autoregressive Temporal Regularizer
In Section 3, we described the TRMF framework in a very general sense, with the regularizer
$T_{\mathcal{M}}(X \mid \Theta)$ incorporating dependencies specified by the time series model $\mathcal{M}_\Theta$. In this section,
we specialize this to the case of AR models, which are parameterized by a lag set L and weights
$\mathcal{W} = \{W^{(l)} \in \mathbb{R}^{k\times k} : l \in L\}$. Assume that $x_t$ is a noisy linear combination of some previous
points; that is, $x_t = \sum_{l\in L} W^{(l)} x_{t-l} + \epsilon_t$, where $\epsilon_t$ is a Gaussian noise vector. For simplicity, we
assume that $\epsilon_t \sim \mathcal{N}(0, \sigma^2 I_k)$, where $I_k$ is the $k \times k$ identity matrix¹. The temporal regularizer
$T_{\mathcal{M}}(X \mid \Theta)$ corresponding to this AR model can be written as:
$$T_{AR}(X \mid L, \mathcal{W}, \eta) := \frac{1}{2}\sum_{t=m}^{T} \Big\| x_t - \sum_{l\in L} W^{(l)} x_{t-l} \Big\|^2 + \frac{\eta}{2}\sum_t \|x_t\|^2, \quad (9)$$
where $m := 1 + L_{\max}$ with $L_{\max} := \max(L)$, and $\eta > 0$ guarantees the strong convexity of (9).
TRMF allows us to learn the weights $W^{(l)}$ when they are unknown. Since each $W^{(l)} \in \mathbb{R}^{k\times k}$,
there will be $|L|k^2$ variables to learn, which may lead to overfitting. To prevent this and to yield
more interpretable results, we consider diagonal $W^{(l)}$, reducing the number of parameters to $|L|k$.
To simplify notation, we use W to denote the $k \times L_{\max}$ matrix whose l-th column holds
the diagonal elements of $W^{(l)}$. Note that for $l \notin L$, the l-th column of W is a zero vector. Let
$\bar{x}_r^\top = [\cdots, X_{rt}, \cdots]$ be the r-th row of X and $\bar{w}_r^\top = [\cdots, W_{rl}, \cdots]$ be the r-th row of W. Then
(9) can be written as $T_{AR}(X \mid L, W, \eta) = \sum_{r=1}^{k} T_{AR}(\bar{x}_r \mid L, \bar{w}_r, \eta)$, where we define
$$T_{AR}(\bar{x} \mid L, \bar{w}, \eta) = \frac{1}{2}\sum_{t=m}^{T} \Big( x_t - \sum_{l\in L} w_l x_{t-l} \Big)^2 + \frac{\eta}{2}\|\bar{x}\|^2, \quad (10)$$
with $x_t$ being the t-th element of $\bar{x}$, and $w_l$ being the l-th element of $\bar{w}$.
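A direct NumPy transcription of (10) might read as follows (our own sketch; `w` maps each lag to its scalar weight):

import numpy as np

def t_ar_row(x, lags, w, eta):
    """Evaluate the scalar AR regularizer (10) for one latent dimension.

    x    : length-T array (one row of X).
    lags : lag set L.
    w    : dict mapping l -> w_l.
    eta  : strong-convexity parameter.
    """
    T = len(x)
    m = 1 + max(lags)   # first (1-based) time index with a full lag window
    resid = np.array([x[t] - sum(w[l] * x[t - l] for l in lags)
                      for t in range(m - 1, T)])   # 0-based t = m-1 .. T-1
    return 0.5 * np.sum(resid ** 2) + 0.5 * eta * np.sum(x ** 2)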
Correlations among Multiple Time Series. Even when $W^{(l)}$ is diagonal, TRMF retains the power
to capture the correlations among time series via the factors $\{f_i\}$, since $W^{(l)}$ affects only the
structure of the latent embeddings $\{x_t\}$. Indeed, as the i-th dimension of $\{y_t\}$ is modeled by $f_i^\top X$
in (7), the low-rank F is a k-dimensional latent embedding of the multiple time series. This embedding
captures correlations among the multiple time series. Furthermore, $\{f_i\}$ acts as time series features,
which can be used to perform classification/clustering even in the presence of missing values.
Choice of Lag Index Set L. Unlike most approaches mentioned in Section 2.2, the choice of L in
TRMF is more flexible. Thus, TRMF can provide important advantages: First, because there is no
need to specify the weight parameters W, L can be chosen to be larger to account for long-range
dependencies, which also yields more accurate and robust forecasts. Second, the indices in L can be
discontinuous, so one can easily embed domain knowledge about periodicity or seasonality. For
example, one might consider L = {1, 2, 3, 51, 52, 53} for weekly data with a one-year seasonality.
Connections to Graph Regularization. We now establish connections between $T_{AR}(\bar{x} \mid L, \bar{w}, \eta)$
and graph regularization (2) for matrix factorization. Let $\bar{L} := L \cup \{0\}$ and $w_0 = -1$, so that (10) is
$$T_{AR}(\bar{x} \mid L, \bar{w}, \eta) = \frac{1}{2}\sum_{t=m}^{T} \Big( \sum_{l\in\bar{L}} w_l x_{t-l} \Big)^2 + \frac{\eta}{2}\|\bar{x}\|^2,$$
and let $\mathcal{S}(d) := \{\, l \in \bar{L} : l - d \in \bar{L} \,\}$. We then have the following result:
Theorem 1. Given a lag index set L, weight vector $\bar{w} \in \mathbb{R}^{L_{\max}}$, and $\bar{x} \in \mathbb{R}^{T}$, there is a weighted
signed graph $G^{AR}$ with T nodes and a diagonal matrix $D \in \mathbb{R}^{T\times T}$ such that
$$T_{AR}(\bar{x} \mid L, \bar{w}, \eta) = \mathcal{G}\big(\bar{x} \mid G^{AR}, \eta\big) + \frac{1}{2}\,\bar{x}^\top D\, \bar{x}, \quad (11)$$
where $\mathcal{G}(\bar{x} \mid G^{AR}, \eta)$ is the graph regularization (2) with $G = G^{AR}$. Furthermore, for all t and d,
$$G^{AR}_{t,t+d} = \begin{cases} -\sum_{l \in \mathcal{S}(d)} w_l w_{l-d} & \text{if } \mathcal{S}(d) \neq \emptyset,\\ 0 & \text{otherwise,} \end{cases} \qquad D_{tt} = \Big(\sum_{l\in\bar{L}} w_l\Big)\Big(\sum_{l\in\bar{L}} w_l\, [\,m \le t + l \le T\,]\Big),$$
where $[\cdot]$ denotes the Iverson bracket.
See Appendix C.1 for a detailed proof. From Theorem 1, we see that $\mathcal{S}(d)$ is non-empty if and
only if there are edges between time points separated by d in $G^{AR}$. Thus, we can construct the
dependency graph for $T_{AR}(\bar{x} \mid L, \bar{w}, \eta)$ by checking whether $\mathcal{S}(d)$ is empty. Figure 3
demonstrates an example with L = {1, 4}. We can see that besides edges of distance d = 1 and d = 4,
there are also edges of distance d = 3 (dotted edges in Figure 3) because $4 - 3 \in \bar{L}$ and $\mathcal{S}(3) = \{4\}$.

[Figure 3: The graph structure induced by the AR temporal regularizer (10) with L = {1, 4}.]
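The edge pattern of Theorem 1 is easy to verify computationally. The short script below (our own illustration; the weights are made up) enumerates $\mathcal{S}(d)$ for L = {1, 4} and prints the induced edge weights, recovering the extra d = 3 edge of Figure 3 and exhibiting edges of both signs:

lags = [1, 4]
Lbar = set(lags) | {0}            # L-bar = L ∪ {0}, with w_0 = -1
w = {0: -1.0, 1: 0.8, 4: 0.5}     # illustrative weights

def S(d):
    """S(d) = { l in Lbar : l - d in Lbar } from Theorem 1."""
    return {l for l in Lbar if l - d in Lbar}

for d in range(1, 6):
    if S(d):
        weight = -sum(w[l] * w[l - d] for l in S(d))
        print(f"d = {d}: S(d) = {S(d)}, edge weight {weight:+.2f}")
# Prints edges at d = 1, 3, 4 only; S(3) = {4}, exactly as in Figure 3.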
¹ If the (known) covariance matrix is not identity, we can suitably modify the regularizer.
Table 1: Data statistics.

                synthetic   electricity   traffic   walmart-1   walmart-2
n                      16           370       963       1,350       1,582
T                     128        26,304    10,560         187         187
missing ratio          0%            0%        0%       55.3%       49.3%
Although Theorem 1 shows that AR-based regularizers are similar to the graph-based regularization
framework, we note the following key differences:
• The graph $G^{AR}$ in Theorem 1 contains both positive and negative edges. This implies that the
AR temporal regularizer is able to support negative correlations, which the standard graph-based
regularizer cannot. This can make $\mathcal{G}(\bar{x} \mid G^{AR}, \eta)$ non-convex. The addition of the second term in
(11), however, still leads to a convex regularizer $T_{AR}(\bar{x} \mid L, \bar{w}, \eta)$.
• Unlike (3), where there is freedom to specify a weight for each distance, in the graph $G^{AR}$ the
weight values for the edges are more structured (e.g., the weight for d = 3 in Figure 3 is $-w_1 w_4$).
Hence, minimization with respect to the $w_l$'s is not trivial, and neither are the obtained solutions.
Plugging $T_{\mathcal{M}}(X \mid \Theta) = T_{AR}(X \mid L, W, \eta)$ into (7), we obtain the following problem:
$$\min_{F, X, W} \sum_{(i,t)\in\Omega} (Y_{it} - f_i^\top x_t)^2 + \lambda_f R_f(F) + \lambda_x \sum_{r=1}^{k} T_{AR}(\bar{x}_r \mid L, \bar{w}_r, \eta) + \lambda_w R_w(W), \quad (12)$$
where $R_w(W)$ is a regularizer for W. We will refer to (12) as TRMF-AR. We can apply alternating
minimization to solve (12). In fact, solving for each variable reduces to well-known methods, for
which highly efficient algorithms exist:
Updates for F . When X and W are fixed, the subproblem of updating F is the same as updating F
while X fixed in (1). Thus, fast algorithms such as alternating least squares or coordinate descent can
be applied directly to find F , which costs O(|?|k 2 ) time.
Updates for X. We solve $\arg\min_X \sum_{(i,t)\in\Omega} (Y_{it} - f_i^\top x_t)^2 + \lambda_x \sum_{r=1}^{k} T_{AR}(\bar{x}_r \mid L, \bar{w}_r, \eta)$. From
Theorem 1, we see that $T_{AR}(\bar{x} \mid L, \bar{w}, \eta)$ shares the same form as the graph regularizer, and we can
apply GRALS [15] to find X, which costs $O(|L| T k^2)$ time.
Updates for W. How to update W while F and X are fixed depends on the choice of $R_w(W)$. There
are many parameter estimation techniques developed for AR models with various regularizers [11, 20]. For
simplicity, we consider the squared Frobenius norm: $R_w(W) = \|W\|_F^2$. As a result, each row $\bar{w}_r$
of W can be updated by solving the following one-dimensional autoregressive problem:
$$\bar{w}_r \leftarrow \arg\min_{\bar{w}}\; \lambda_x T_{AR}(\bar{x}_r \mid L, \bar{w}, \eta) + \lambda_w \|\bar{w}\|^2 = \arg\min_{\bar{w}} \sum_{t=m}^{T} \Big( x_t - \sum_{l\in L} w_l x_{t-l} \Big)^2 + \frac{2\lambda_w}{\lambda_x} \|\bar{w}\|^2,$$
which is a simple |L|-dimensional ridge regression problem with $T - m + 1$ instances; it can be
solved efficiently by Cholesky factorization in $O(|L|^3 + T|L|^2)$ time.
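A minimal sketch of this W-update for one row (our own NumPy illustration, assuming the squared-Frobenius penalty above):

import numpy as np

def update_w_row(x, lags, lam_w, lam_x):
    """Ridge-regression update of one row of W; x is one row of X."""
    T = len(x)
    m = 1 + max(lags)
    # Design matrix: one row per t = m..T, columns are x_{t-l} for l in L.
    A = np.array([[x[t - l] for l in lags] for t in range(m - 1, T)])
    b = x[m - 1:T]
    # Normal equations of ridge regression, solved via Cholesky.
    H = A.T @ A + (2.0 * lam_w / lam_x) * np.eye(len(lags))
    c = np.linalg.cholesky(H)          # H = c @ c.T, c lower-triangular
    return np.linalg.solve(c.T, np.linalg.solve(c, A.T @ b))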
Note that since our method is highly modular, one can resort to any method to solve the optimization
subproblems that arise for each module. Moreover, as mentioned in Section 3, TRMF can also be
used with different regularization structures, making it highly adaptable.
4.1 Connections to Existing MF Approaches
TRMF-AR is a generalization of many existing MF approaches to handle data with temporal
dependencies. Specifically, Temporal Collaborative Filtering [23] corresponds to $W^{(1)} = I_k$ on $\{x_t\}$. The
NMF method of [2] is an AR(L) model with $W^{(l)} = \alpha^{l-1}(1 - \alpha) I_k, \forall l$, where $\alpha$ is pre-defined.
The AR(1) model of [16, 26] has $W^{(1)} = I_n$ on $\{F x_t\}$. Finally, the DLM [7] is a latent AR(1)
model with a general $W^{(1)}$, which can be estimated by EM algorithms.
4.2 Connections to Learning Gaussian Markov Random Fields
The Gaussian Markov Random Field (GMRF) is a general way to model multivariate data with
dependencies. A GMRF assumes that the data are generated from a multivariate Gaussian distribution
with a covariance matrix $\Sigma$ which describes the dependencies among the T-dimensional variables,
i.e., $\bar{x} \sim \mathcal{N}(0, \Sigma)$. If the unknown $\bar{x}$ is assumed to be generated from this model, the negative log
likelihood of the data can be written as $\bar{x}^\top \Sigma^{-1} \bar{x}$, ignoring constants, where $\Sigma^{-1}$ is the inverse
covariance matrix of the Gaussian distribution. This prior can be incorporated into an empirical risk
minimization framework as a regularizer. Furthermore, it is known that if $(\Sigma^{-1})_{st} = 0$, then $x_t$ and $x_s$
are conditionally independent given the other variables.
Table 2: Forecasting results: ND/NRMSE for each approach. Lower values are better. "-" indicates
unavailability due to scalability or an inability to handle missing values.

                          Matrix Factorization Models              Time Series Models
             TRMF-AR      SVD-AR(1)    TCF          AR(1)        DLM          R-DLM        Mean
Forecasting with Full Observation
synthetic    0.373/0.487  0.444/0.872  1.000/1.424  0.928/1.401  0.936/1.391  0.996/1.420  1.000/1.424
electricity  0.255/1.397  0.257/1.865  0.349/1.838  0.219/1.439  0.435/2.753  -/-          1.410/4.528
traffic      0.187/0.423  0.555/1.194  0.624/0.931  0.275/0.536  0.639/0.951  -/-          0.560/0.826
Forecasting with Missing Values
walmart-1    0.533/1.958  -/-          0.540/2.231  -/-          0.602/2.293  -/-          1.239/3.103
walmart-2    0.432/1.065  -/-          0.446/1.124  -/-          0.453/1.110  -/-          1.097/2.088
In Theorem 1 we established connections to graph-based regularizers, and such methods can be seen
as regularizing with the inverse covariance matrix for Gaussians [27]. We thus have the following result:
Corollary 1. For any lag set L, $\bar{w}$, and $\eta > 0$, the inverse covariance matrix $\Sigma_{AR}^{-1}$ of the GMRF model
corresponding to the quadratic regularizer $R_x(\bar{x}) := T_{AR}(\bar{x} \mid L, \bar{w}, \eta)$ shares the same off-diagonal
non-zero pattern as $G^{AR}$ defined in Theorem 1. Moreover, we have $T_{AR}(\bar{x} \mid L, \bar{w}, \eta) = \bar{x}^\top \Sigma_{AR}^{-1} \bar{x}$.
A detailed proof is in Appendix C.2. As a result, our proposed AR-based regularizer is equivalent to
imposing a Gaussian prior on $\bar{x}$ with a structured inverse covariance described by the matrix $G^{AR}$
defined in Theorem 1. Moreover, the step to learn W has a natural interpretation: the lag set L
imposes the non-zero pattern of the graphical model on the data, and then we solve a simple least
squares problem to learn the weights corresponding to the edges. As an application of Theorem 1
from [15] and Corollary 1, when $R_f(F) = \|F\|_F^2$, we can relate $T_{AR}$ to a weighted nuclear norm:
$$\|ZB\|_* = \inf_{F, X :\, Z = FX} \frac{1}{2}\Big( \|F\|_F^2 + \sum_r T_{AR}(\bar{x}_r \mid L, \bar{w}, \eta) \Big), \quad (13)$$
where $B = U S^{1/2}$ and $\Sigma_{AR}^{-1} = U S U^\top$ is the eigen-decomposition of $\Sigma_{AR}^{-1}$. (13) enables us to apply
the results from [15] to obtain guarantees for the use of the AR temporal regularizer when W is given. For
simplicity, we assume $\bar{w}_r = \bar{w}\;\forall r$, and consider a relaxed convex formulation of (12) as follows:
$$\hat{Z} = \arg\min_{Z\in\mathcal{C}}\; \frac{1}{N} \sum_{(i,j)\in\Omega} (Y_{ij} - Z_{ij})^2 + \lambda_z \|ZB\|_*, \quad (14)$$
where $N = |\Omega|$, and $\mathcal{C}$ is a set of matrices with low spikiness. Full details are provided in Appendix C.3. As an application of Theorem 2 from [15], we have the following corollary.
Corollary 2. Let $Z^\star = FX$ be the ground truth $n \times T$ time series matrix of rank k. Let Y be
the matrix with $N = |\Omega|$ randomly observed entries corrupted with additive Gaussian noise with
variance $\sigma^2$. Then if $\lambda_z \ge C_1 \sqrt{\tfrac{(n+T)\log(n+T)}{N}}$, with high probability for the $\hat{Z}$ obtained by (14),
$$\|\hat{Z} - Z^\star\|_F^2 \le C_2\, \sigma^2 \max(1, \alpha^2)\, \frac{k(n+T)\log(n+T)}{N} + O(\alpha^2/N),$$
where $C_1, C_2$ are positive constants, and $\alpha$ depends on the product $Z^\star B$.
See Appendix C.3 for details. From the results in Table 3, we observe superior performance of
TRMF-AR over standard MF, indicating that the $\bar{w}$ learnt by our data-driven approach (12) does aid
in recovering the missing entries of time series. We would like to point out that establishing a
theoretical guarantee for TRMF when W is unknown remains a challenging research direction.
5 Experimental Results
We consider five datasets (Table 1). For synthetic, we first randomly generate $F \in \mathbb{R}^{16\times 4}$ and
generate $\{x_t\}$ following an AR process with L = {1, 8}. Then Y is obtained by $y_t = F x_t + \epsilon_t$, where
$\epsilon_t \sim \mathcal{N}(0, 0.1 I)$. The datasets electricity and traffic are obtained from the UCI repository, while
walmart-1 and walmart-2 are two proprietary datasets from Walmart E-commerce containing weekly
sales information. Due to reasons such as out-of-stock items, 55.3% and 49.3% of their entries are
missing, respectively. To evaluate prediction performance, we consider the normalized deviation (ND)
and the normalized RMSE (NRMSE). See Appendix A for the description of each dataset and the
formal definition of each criterion.

[Figure 4: Scalability: T = 512, n ∈ {500, 1000, . . . , 50000}. AR({1, . . . , 8}) cannot finish in 1 day.]
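For reference, a sketch of these two metrics under their standard definitions (our own code; the exact forms used in the paper are given in its Appendix A):

import numpy as np

def nd_nrmse(Y_true, Y_pred, mask):
    """Normalized deviation (ND) and normalized RMSE (NRMSE) over the
    evaluated entries; `mask` is a boolean array marking those entries."""
    diff = (Y_pred - Y_true)[mask]
    ref = Y_true[mask]
    nd = np.abs(diff).sum() / np.abs(ref).sum()
    nrmse = np.sqrt(np.mean(diff ** 2)) / np.abs(ref).mean()
    return nd, nrmse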
Table 3: Missing value imputation results: ND/NRMSE for each approach. Note that TRMF
outperforms all competing methods in almost all cases.

                            Matrix Factorization Models            Time Series Models
             |Ω|/(nT)       TRMF-AR      TCF          MF           DLM          Mean
synthetic    20%            0.467/0.661  0.713/1.030  0.688/1.064  0.933/1.382  1.002/1.474
             30%            0.336/0.455  0.629/0.961  0.595/0.926  0.913/1.324  1.004/1.445
             40%            0.231/0.306  0.495/0.771  0.374/0.548  0.834/1.259  1.002/1.479
             50%            0.201/0.270  0.289/0.464  0.317/0.477  0.772/1.186  1.001/1.498
electricity  20%            0.245/2.395  0.255/2.427  0.362/2.903  0.462/4.777  1.333/6.031
             30%            0.235/2.415  0.245/2.436  0.355/2.766  0.410/6.605  1.320/6.050
             40%            0.231/2.429  0.242/2.457  0.348/2.697  0.196/2.151  1.322/6.030
             50%            0.223/2.434  0.233/2.459  0.319/2.623  0.158/1.590  1.320/6.109
traffic      20%            0.190/0.427  0.208/0.448  0.310/0.604  0.353/0.603  0.578/0.857
             30%            0.186/0.419  0.199/0.432  0.299/0.581  0.286/0.518  0.578/0.856
             40%            0.185/0.416  0.198/0.428  0.292/0.568  0.251/0.476  0.578/0.857
             50%            0.184/0.415  0.193/0.422  0.251/0.510  0.224/0.447  0.578/0.857
Methods/Implementations Compared:
• TRMF-AR: The proposed formulation (12) with $R_w(W) = \|W\|_F^2$. For L, we use $\{1, 2, \ldots, 8\}$
for synthetic, $\{1, \ldots, 24\} \cup \{7 \times 24, \ldots, 8 \times 24 - 1\}$ for electricity and traffic, and $\{1, \ldots, 10\} \cup
\{50, \ldots, 56\}$ for walmart-1 and walmart-2 to capture seasonality.
• SVD-AR(1): The rank-k approximation $Y = U S V^\top$ is first obtained by SVD. After setting
$F = US$ and $X = V^\top$, a k-dimensional AR(1) model is learned on X for forecasting.
• TCF: Matrix factorization with the simple temporal regularizer proposed in [23].
• AR(1): The n-dimensional AR(1) model.²
• DLM: Two implementations: the widely used R-DLM package [12] and the code provided in [8].
• Mean: The baseline, which predicts everything to be the mean of the observed portion of Y.
For each method and data set, we perform a grid search over various parameters (such as k and the
λ values) following the rolling validation approach described in [11].
Scalability: Figure 4 shows that traditional time-series approaches such as AR or DLM suffer
from scalability issues for large n, while TRMF-AR scales much better with n. Specifically, for
n = 50,000, TRMF is two orders of magnitude faster than the competing AR/DLM methods. Note that
results for R-DLM are not available because the R package cannot scale beyond n in the tens
(see Appendix D for more details). Furthermore, the dlmMLE routine in R-DLM uses a general
optimization solver, which is orders of magnitude slower than the implementation provided in [8].
5.1 Forecasting
Forecasting with Full Observations. We first compare the various methods on the task of forecasting
values in the test set, given fully observed training data. For synthetic, we consider the one-point-ahead
forecasting task and use the last ten time points as the test periods. For electricity and traffic, we
consider the 24-hour-ahead forecasting task and use the last seven days as the test periods. From Table 2,
we can see that TRMF-AR outperforms all the other methods on both metrics considered.
Forecasting with Missing Values. We next compare the methods on the task of forecasting in the
presence of missing values in the data. We use the Walmart datasets here, consider 6-week-ahead
forecasting, and use the last 54 weeks as the test periods. Note that SVD-AR(1) and AR(1) cannot handle
missing values. The second part of Table 2 shows that we again outperform the other methods.
5.2 Missing Value Imputation
We next consider the case of imputing missing values in the data. As in [9], we assume that blocks of
data are missing over a length of time, corresponding, for example, to sensor malfunctions. To create
data with missing entries, we first fixed the percentage of data that we were interested in observing,
and then uniformly at random occluded blocks of a predetermined length (2 for the synthetic data and
5 for the real datasets). The goal was to predict the occluded values. Table 3 shows that TRMF
outperforms the compared methods in almost all cases.
6 Conclusions
We propose a novel temporal regularized matrix factorization framework (TRMF) for high-dimensional
time series problems with missing values. TRMF not only models temporal dependency
among the data points, but also supports data-driven dependency learning. TRMF generalizes several
well-known methods and yields superior performance when compared to other state-of-the-art
methods on real-world datasets.
Acknowledgements: This research was supported by NSF grants (CCF-1320746, IIS-1546459, and CCF-1564000) and gifts from Walmart Labs and Adobe. We thank Abhay Jha for the help on Walmart experiments.
² In Appendix A, we also show a baseline which applies an independent AR model to each dimension.
References
[1] O. Anava, E. Hazan, and A. Zeevi. Online time series prediction with missing data. In Proceedings of the International Conference on Machine Learning, pages 2191–2199, 2015.
[2] Z. Chen and A. Cichocki. Nonnegative matrix factorization with temporal smoothness and/or spatial decorrelation constraints. Laboratory for Advanced Brain Signal Processing, RIKEN, Tech. Rep. 68, 2005.
[3] Z. Ghahramani and G. E. Hinton. Parameter estimation for linear dynamical systems. Technical Report CRG-TR-96-2, University of Toronto, Dept. of Computer Science, 1996.
[4] R. L. Graham, D. E. Knuth, and O. Patashnik. Concrete Mathematics: A Foundation for Computer Science. Addison-Wesley Longman Publishing Co., Inc., 2nd edition, 1994.
[5] F. Han and H. Liu. Transition matrix estimation in high dimensional time series. In Proceedings of the International Conference on Machine Learning, pages 172–180, 2013.
[6] P. Jain and I. S. Dhillon. Provable inductive matrix completion. arXiv preprint arXiv:1306.0626, 2013.
[7] R. E. Kalman. A new approach to linear filtering and prediction problems. Journal of Fluids Engineering, 82(1):35–45, 1960.
[8] L. Li and B. A. Prakash. Time series clustering: Complex is simpler! In Proceedings of the International Conference on Machine Learning, pages 185–192, 2011.
[9] L. Li, J. McCann, N. S. Pollard, and C. Faloutsos. DynaMMo: Mining and summarization of coevolving sequences with missing values. In ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 507–516. ACM, 2009.
[10] I. Melnyk and A. Banerjee. Estimating structured vector autoregressive model. In Proceedings of the Thirty-Third International Conference on Machine Learning (ICML), 2016.
[11] W. B. Nicholson, D. S. Matteson, and J. Bien. Structured regularization for large vector autoregressions. Technical report, Cornell University, 2014.
[12] G. Petris. An R package for dynamic linear models. Journal of Statistical Software, 36(12):1–16, 2010.
[13] G. Petris, S. Petrone, and P. Campagnoli. Dynamic Linear Models with R. Use R! Springer, 2009.
[14] S. Rallapalli, L. Qiu, Y. Zhang, and Y.-C. Chen. Exploiting temporal stability and low-rank structure for localization in mobile networks. In International Conference on Mobile Computing and Networking, MobiCom '10, pages 161–172. ACM, 2010.
[15] N. Rao, H.-F. Yu, P. K. Ravikumar, and I. S. Dhillon. Collaborative filtering with graph information: Consistency and scalable methods. In Advances in Neural Information Processing Systems 27, 2015.
[16] M. Roughan, Y. Zhang, W. Willinger, and L. Qiu. Spatio-temporal compressive sensing and internet traffic matrices (extended version). IEEE/ACM Transactions on Networking, 20(3):662–676, June 2012.
[17] R. H. Shumway and D. S. Stoffer. An approach to time series smoothing and forecasting using the EM algorithm. Journal of Time Series Analysis, 3(4):253–264, 1982.
[18] A. J. Smola and R. Kondor. Kernels and regularization on graphs. In Learning Theory and Kernel Machines, pages 144–158. Springer, 2003.
[19] J. Z. Sun, K. R. Varshney, and K. Subbian. Dynamic matrix factorization: A state space approach. In Proceedings of the International Conference on Acoustics, Speech and Signal Processing, pages 1897–1900. IEEE, 2012.
[20] H. Wang, G. Li, and C.-L. Tsai. Regression coefficient and autoregressive order shrinkage and selection via the lasso. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 69(1):63–78, 2007.
[21] M. West and J. Harrison. Bayesian Forecasting and Dynamic Models. Springer Series in Statistics. Springer, 2013.
[22] K. Wilson, B. Raj, and P. Smaragdis. Regularized non-negative matrix factorization with temporal dependencies for speech denoising. In Interspeech, pages 411–414, 2008.
[23] L. Xiong, X. Chen, T.-K. Huang, J. G. Schneider, and J. G. Carbonell. Temporal collaborative filtering with Bayesian probabilistic tensor factorization. In SIAM International Conference on Data Mining, pages 223–234, 2010.
[24] M. Xu, R. Jin, and Z.-H. Zhou. Speedup matrix completion with side information: Application to multi-label learning. In Advances in Neural Information Processing Systems 26, pages 2301–2309, 2013.
[25] H.-F. Yu, P. Jain, P. Kar, and I. S. Dhillon. Large-scale multi-label learning with missing labels. In Proceedings of the International Conference on Machine Learning, pages 593–601, 2014.
[26] Y. Zhang, M. Roughan, W. Willinger, and L. Qiu. Spatio-temporal compressive sensing and internet traffic matrices. SIGCOMM Comput. Commun. Rev., 39(4):267–278, Aug. 2009.
[27] T. Zhou, H. Shan, A. Banerjee, and G. Sapiro. Kernelized probabilistic matrix factorization: Exploiting graphs and side information. In SDM, volume 12, pages 403–414. SIAM, 2012.
Unsupervised Learning for Physical Interaction
through Video Prediction
Chelsea Finn*
UC Berkeley
[email protected]
Ian Goodfellow
OpenAI
[email protected]
Sergey Levine
Google Brain
UC Berkeley
[email protected]
Abstract
A core challenge for an agent learning to interact with the world is to predict
how its actions affect objects in its environment. Many existing methods for
learning the dynamics of physical interactions require labeled object information.
However, to scale real-world interaction learning to a variety of scenes and objects,
acquiring labeled data becomes increasingly impractical. To learn about physical
object motion without labels, we develop an action-conditioned video prediction
model that explicitly models pixel motion, by predicting a distribution over pixel
motion from previous frames. Because our model explicitly predicts motion, it
is partially invariant to object appearance, enabling it to generalize to previously
unseen objects. To explore video prediction for real-world interactive agents, we
also introduce a dataset of 59,000 robot interactions involving pushing motions,
including a test set with novel objects. In this dataset, accurate prediction of videos
conditioned on the robot's future actions amounts to learning a "visual imagination"
of different futures based on different courses of action. Our experiments show that
our proposed method produces more accurate video predictions both quantitatively
and qualitatively, when compared to prior methods.
1 Introduction
Object detection, tracking, and motion prediction are fundamental problems in computer vision,
and predicting the effect of physical interactions is a critical challenge for learning agents acting in
the world, such as robots, autonomous cars, and drones. Most existing techniques for learning to
predict physics rely on large manually labeled datasets (e.g. [18]). However, if interactive agents
can use unlabeled raw video data to learn about physical interaction, they can autonomously collect
virtually unlimited experience through their own exploration. Learning a representation which can
predict future video without labels has applications in action recognition and prediction and, when
conditioned on the action of the agent, amounts to learning a predictive model that can then be used
for planning and decision making.
However, learning to predict physical phenomena poses many challenges, since real-world physical
interactions tend to be complex and stochastic, and learning from raw video requires handling the
high dimensionality of image pixels and the partial observability of object motion from videos. Prior
video prediction methods have typically considered short-range prediction [17], small image patches
[22], or synthetic images [20]. Such models follow a paradigm of reconstructing future frames
from the internal state of the model. In our approach, we propose a method which does not require
the model to store the object and background appearance. Such appearance information is directly
available in the previous frame. We develop a predictive model which merges appearance information
* Work was done while the author was at Google Brain.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
from previous frames with motion predicted by the model. As a result, the model is better able to
predict future video sequences for multiple steps, even involving objects not seen at training time.
To merge appearance and predicted motion, we output the motion of pixels relative to the previous
image. Applying this motion to the previous image forms the next frame. We present and evaluate
three motion prediction modules. The first, which we refer to as dynamic neural advection (DNA),
outputs a distribution over locations in the previous frame for each pixel in the new frame. The
predicted pixel value is then computed as an expectation under this distribution. A variant on this
approach, which we call convolutional dynamic neural advection (CDNA), outputs the parameters of
multiple normalized convolution kernels to apply to the previous image to compute new pixel values.
The last approach, which we call spatial transformer predictors (STP), outputs the parameters of
multiple affine transformations to apply to the previous image, akin to the spatial transformer network
previously proposed for supervised learning [11]. In the case of the latter two methods, each predicted
transformation is meant to handle separate objects. To combine the predictions into a single image,
the model also predicts a compositing mask over each of the transformations. DNA and CDNA are
simpler and easier to implement than STP, and while all models achieve comparable performance,
the object-centric CDNA and STP models also provide interpretable internal representations.
Our main contribution is a method for making long-range predictions in real-world videos by
predicting pixel motion. When conditioned on the actions taken by an agent, the model can learn
to imagine different futures from different actions. To learn about physical interaction from videos,
we need a large dataset with complex object interactions. We collected a dataset of 59,000 robot
pushing motions, consisting of 1.5 million frames and the corresponding actions at each time step.
Our experiments using this new robotic pushing dataset, and using a human motion video dataset [10],
show that models that explicitly transform pixels from previous frames better capture object motion
and produce more accurate video predictions compared to prior state-of-the-art methods. The dataset,
video results, and code are all available online: sites.google.com/site/robotprediction.
2 Related Work
Video prediction: Prior work on video prediction has tackled synthetic videos and short-term
prediction in real videos. Yuan et al. [30] used a nearest neighbor approach to construct predictions
from similar videos in a dataset. Ranzato et al. proposed a baseline for video prediction inspired by
language models [21]. LSTM models have been adapted for video prediction on patches [22], actionconditioned Atari frame predictions [20], and precipitation nowcasting [28]. Mathieu et al. proposed
new loss functions for sharper frame predictions [17]. Prior methods generally reconstruct frames
from the internal state of the model, and some predict the internal state directly, without producing
images [23]. Our method instead transforms pixels from previous frames, explicitly modeling motion
and, in the case of the CDNA and STP models, decomposing it over image segments. We found in
our experiments that all three of our models produce substantially better predictions by advecting
pixels from the previous frame and compositing them onto the new image, rather than constructing
images from scratch. This approach differs from recent work on optic flow prediction [25], which
predicts where pixels will move to using direct optical flow supervision. Boots et al. predict future
images of a robot arm using nonparametric kernel-based methods [4]. In contrast to this work, our
approach uses flexible parametric models, and effectively predicts interactions with objects, including
objects not seen during training. To our knowledge, no previous video prediction method has been
applied to predict real images with novel object interactions beyond two time steps into the future.
There have been a number of promising methods for frame prediction developed concurrently to
this work [16]. Vondrick et al. [24] combine an adversarial objective with a multiscale, feedforward
architecture, and use a foreground/background mask similar to the masking scheme proposed here. De
Brabandere et al. [6] propose a method similar to our DNA model, but use a softmax for sharper flow
distributions. The probabilistic model proposed by Xue et al. [29] predicts transformations applied to
latent feature maps, rather than the image itself, but only demonstrates single frame prediction.
Learning physics: Several works have explicitly addressed prediction of physical interactions,
including predicting ball motion [5], block falling [2], the effects of forces [19, 18], future human
interactions [9], and future car trajectories [26]. These methods require ground truth object pose
information, segmentation masks, camera viewpoint, or image patch trackers. In the domain of
reinforcement learning, model-based methods have been proposed that learn prediction on images [14,
27], but they have either used synthetic images or instance-level models, and have not demonstrated
Figure 1: Architecture of the CDNA model, one of the three proposed pixel advection models. We
use convolutional LSTMs to process the image, outputting 10 normalized transformation kernels
from the smallest middle layer of the network and an 11-channel compositing mask from the last
layer (including 1 channel for static background). The kernels are applied to transform the previous
image into 10 different transformed images, which are then composited according to the masks. The
masks sum to 1 at each pixel due to a channel-wise softmax. Yellow arrows denote skip connections.
generalization to novel objects nor accurate prediction on real-world videos. As shown by our
comparison to LSTM-based prediction designed for Atari frames [20], models that work well on
synthetic domains do not necessarily succeed on real images.
Video datasets: Existing video datasets capture YouTube clips [12], human motion [10], synthetic
video game frames [20], and driving [8]. However, to investigate learning visual physics prediction,
we need data that exhibits rich object motion, collisions, and interaction information. We propose
a large new dataset consisting of real-world videos of robot-object interactions, including complex
physical phenomena, realistic occlusions, and a clear use-case for interactive robot learning.
3 Motion-Focused Predictive Models
In order to learn about object motion while remaining invariant to appearance, we introduce a class of
video prediction models that directly use appearance information from previous frames to construct
pixel predictions. Our model computes the next frame by first predicting the motions of image
segments, then merges these predictions via masking. In this section, we discuss our novel pixel
transformation models, and propose how to effectively merge predicted motion of multiple segments
into a single next image prediction. The architecture of the CDNA model is shown in Figure 1.
Diagrams of the DNA and STP models are in Appendix B.
3.1 Pixel Transformations for Future Video Prediction
The core of our models is a motion prediction module that predicts objects' motion without attempting
to reconstruct their appearance. This module is therefore partially invariant to appearance and can
generalize effectively to previously unseen objects. We propose three motion prediction modules:
Dynamic Neural Advection (DNA): In this approach, we predict a distribution over locations in
the previous frame for each pixel in the new frame. The predicted pixel value is computed as an
expectation under this distribution. We constrain the pixel movement to a local region, under the
regularizing assumption that pixels will not move large distances. This keeps the dimensionality of
the prediction low. This approach is the most flexible of the proposed approaches.
Formally, we apply the predicted motion transformation $\hat{m}$ to the previous image prediction $\hat{I}_{t-1}$ for
every pixel (x, y) to form the next image prediction $\hat{I}_t$ as follows:
$$\hat{I}_t(x, y) = \sum_{k\in(-\kappa,\kappa)} \sum_{l\in(-\kappa,\kappa)} \hat{m}_{xy}(k, l)\, \hat{I}_{t-1}(x-k, y-l)$$
where $\kappa$ is the spatial extent of the predicted distribution. This can be implemented as a convolution
with untied weights. The architecture of this model matches the CDNA model in Figure 1, except that
the higher-dimensional transformation parameters $\hat{m}$ are outputted by the last (conv 2) layer instead
of the LSTM 5 layer used for the CDNA model.
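To make the DNA operation concrete, here is a minimal NumPy sketch of the per-pixel expectation (our own single-channel illustration, not the actual TensorFlow model; `g` plays the role of the spatial extent $\kappa$ above):

import numpy as np

def dna_advection(prev_img, kernels):
    """Per-pixel expected value under a local motion distribution.

    prev_img : H x W image (single channel for simplicity).
    kernels  : H x W x (2g+1) x (2g+1) nonnegative weights, normalized
               to sum to 1 over the last two axes.
    """
    H, W, kh, kw = kernels.shape
    g = kh // 2
    padded = np.pad(prev_img, g, mode="edge")
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            patch = padded[y:y + kh, x:x + kw]   # window centered at (y, x)
            out[y, x] = np.sum(kernels[y, x] * patch)
    return out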
Convolutional Dynamic Neural Advection (CDNA): Under the assumption that the same mechanisms can be used to predict the motions of different objects in different regions of the image,
we consider a more object-centric approach to predicting motion. Instead of predicting a different
distribution for each pixel, this model predicts multiple discrete distributions that are each applied
to the entire image via a convolution (with tied weights), which computes the expected value of the
motion distribution for every pixel. The idea is that pixels on the same rigid object will move together,
and therefore can share the same transformation. More formally, one predicted object transformation
$\hat{m}$ applied to the previous image $\hat{I}_{t-1}$ produces image $\hat{J}_t$ for each pixel (x, y) as follows:
$$\hat{J}_t(x, y) = \sum_{k\in(-\kappa,\kappa)} \sum_{l\in(-\kappa,\kappa)} \hat{m}(k, l)\, \hat{I}_{t-1}(x-k, y-l)$$
where $\kappa$ is the spatial size of the normalized predicted convolution kernel $\hat{m}$. Multiple transformations
$\{\hat{m}^{(i)}\}$ are applied to the previous image $\hat{I}_{t-1}$ to form multiple images $\{\hat{J}_t^{(i)}\}$. These output images
are combined into a single prediction $\hat{I}_t$ as described in the next section and shown in Figure 1.
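A sketch of the CDNA operation (our own NumPy/SciPy illustration for a single channel; in the actual model the predicted kernels are applied inside the network):

import numpy as np
from scipy.signal import correlate2d

def cdna_transform(prev_img, kernels):
    """Apply each normalized kernel to the entire previous image with
    tied weights, producing one transformed image per kernel.

    prev_img : H x W image (single channel for simplicity).
    kernels  : N x kh x kw array of nonnegative kernels, each summing to 1.
    """
    return np.stack([correlate2d(prev_img, k, mode="same", boundary="symm")
                     for k in kernels])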
Spatial Transformer Predictors (STP): In this approach, the model produces multiple sets of
parameters for 2D affine image transformations, and applies the transformations using a bilinear
sampling kernel [11]. More formally, a set of affine parameters $\hat{M}$ produces a warping grid between
previous image pixels $(x_{t-1}, y_{t-1})$ and generated image pixels $(x_t, y_t)$:
$$\begin{pmatrix} x_{t-1} \\ y_{t-1} \end{pmatrix} = \hat{M} \begin{pmatrix} x_t \\ y_t \\ 1 \end{pmatrix}$$
This grid can be applied with a bilinear kernel to form an image $\hat{J}_t$:
$$\hat{J}_t(x_t, y_t) = \sum_k^W \sum_l^H \hat{I}_{t-1}(k, l)\, \max(0, 1 - |x_{t-1} - k|)\, \max(0, 1 - |y_{t-1} - l|)$$
where W and H are the image width and height. While this type of operator has been applied
previously only to supervised learning tasks, it is well-suited for video prediction. Multiple
transformations $\{\hat{M}^{(i)}\}$ are applied to the previous image $\hat{I}_{t-1}$ to form multiple images $\{\hat{J}_t^{(i)}\}$, which are
then composited based on the masks. The architecture matches the diagram in Figure 1, but instead
of outputting CDNA kernels at the LSTM 5 layer, the model outputs the STP parameters $\{\hat{M}^{(i)}\}$.
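A minimal sketch of one STP transformation using the bilinear kernel above (our own NumPy illustration; the model itself uses the differentiable sampler of [11]):

import numpy as np

def stp_warp(prev_img, M):
    """Warp prev_img with a 2x3 affine matrix M and bilinear sampling."""
    H, W = prev_img.shape
    out = np.zeros((H, W))
    for yt in range(H):
        for xt in range(W):
            # M maps output coords (x_t, y_t, 1) to source coords.
            xs, ys = M @ np.array([xt, yt, 1.0])
            k0, l0 = int(np.floor(xs)), int(np.floor(ys))
            for l in range(max(0, l0), min(H, l0 + 2)):
                for k in range(max(0, k0), min(W, k0 + 2)):
                    w = max(0.0, 1 - abs(xs - k)) * max(0.0, 1 - abs(ys - l))
                    out[yt, xt] += prev_img[l, k] * w
    return out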
All of these models can focus on learning physics rather than object appearance. Our experiments
show that these models are better able to generalize to unseen objects compared to models that
reconstruct the pixels directly or predict the difference from the previous frame.
3.2 Composing Object Motion Predictions
CDNA and STP produce multiple object motion predictions, which need to be combined into a single
image. The composition of the predicted images $\hat{J}_t^{(i)}$ is modulated by a mask $\Xi$, which defines a
weight on each prediction, for each pixel. Thus, $\hat{I}_t = \sum_c \hat{J}_t^{(c)} \circ \Xi_c$, where c denotes the channel
of the mask and the element-wise multiplication is over pixels. To obtain the mask, we apply a
channel-wise softmax to the final convolutional layer in the model (conv 2 in Figure 1), which ensures
that the channels of the mask sum to 1 for each pixel position.
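A sketch of this masked compositing step (our own NumPy illustration):

import numpy as np

def composite(transformed, mask_logits):
    """Combine transformed images with a channel-wise softmax mask.

    transformed : C x H x W stack of transformed (and background) images.
    mask_logits : C x H x W outputs of the final conv layer; the softmax
                  makes the masks sum to 1 at each pixel.
    """
    m = np.exp(mask_logits - mask_logits.max(axis=0, keepdims=True))
    masks = m / m.sum(axis=0, keepdims=True)
    return (masks * transformed).sum(axis=0)   # predicted next frame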
In practice, our experiments show that the CDNA and STP models learn to mask out objects that
are moving in consistent directions. The benefit of this approach is two-fold: first, predicted motion
transformations are reused for multiple pixels in the image, and second, the model naturally extracts
a more object-centric representation in an unsupervised fashion, a desirable property for an agent
learning to interact with objects. The DNA model lacks these two benefits, but instead is more flexible
as it can produce independent motions for every pixel in the image.
For each model, including DNA, we also include a "background mask" where we allow the models
to copy pixels directly from the previous frame. Besides improving performance, this also produces
interpretable background masks that we visualize in Section 5. Additionally, to fill in previously
occluded regions, which may not be well represented by nearby pixels, we allowed the models to
generate pixels from an image, and included it in the final masking step.
3.3 Action-conditioned Convolutional LSTMs
Most existing physics and video prediction models use feedforward architectures [17, 15] or feedforward encodings of the image [20]. To generate the motion predictions discussed above, we employ
stacked convolutional LSTMs [28]. Recurrence through convolutions is a natural fit for multi-step
video prediction because it takes advantage of the spatial invariance of image representations, as the
laws of physics are mostly consistent across space. As a result, models with convolutional recurrence
require significantly fewer parameters and use those parameters more efficiently.
The model architecture is displayed in Figure 1 and detailed in Appendix B. In an interactive setting,
the agent's actions and internal state (such as the pose of the robot gripper) influence the next image.
We integrate both into our model by spatially tiling the concatenated state and action vector across a
feature map, and concatenating the result to the channels of the lowest-dimensional activation map.
Note, though, that the agent's internal state (i.e. the robot gripper pose) is only input into the network
at the beginning, and must be predicted from the actions in future timesteps. We trained the networks
using an l2 reconstruction loss. Alternative losses, such as those presented in [17] could complement
this method.
4 Robotic Pushing Dataset
One key application of action-conditioned video prediction is to use the learned model for decision
making in vision-based robotic control tasks. Unsupervised learning from video can enable agents to
learn about the world on their own, without human involvement, a critical requirement for scaling up
interactive learning. In order to investigate action-conditioned video prediction for robotic tasks, we
need a dataset with real-world physical object interactions. We collected a new dataset using 10
robotic arms, shown in Figure 2, pushing hundreds of objects in bins, amounting to 57,000 interaction
sequences with 1.5 million video frames. Two test sets, each with 1,250 recorded motions, were also
collected. The first test set used two different subsets of the objects pushed during training. The
second test set involved two subsets of objects, none of which were used during training. In addition
to RGB images, we also record the corresponding gripper poses, which we refer to as the internal
state, and actions, which corresponded to the commanded gripper pose. The dataset is publicly
available². Further details on the data collection procedure are provided in Appendix A.

[Figure 2: Robot data collection setup (top) and example images captured from the robot's camera (bottom).]
5 Experiments
We evaluate our method using the dataset in Section 4, as well as on videos of human motion in
the Human3.6M dataset [10]. In both settings, we evaluate our three models described in Section 3,
as well as prior models [17, 20]. For CDNA and STP, we used 10 transformers. While we show
stills from the predicted videos in the figures, the qualitative results are easiest to compare when the
predicted videos can be viewed side-by-side. For this reason, we encourage the reader to examine
the video results on the supplemental website². Code for training the model is also available on the
website.
Training details: We trained all models using the TensorFlow library [1], optimizing to convergence using ADAM [13] with the suggested hyperparameters. We trained all recurrent models
with and without scheduled sampling [3] and report the performance of the model with the best
validation error. We found that scheduled sampling improved performance of our models, but did not
substantially affect the performance of ablation and baseline models that did not model pixel motion.
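Scheduled sampling [3] decides, at each training step, whether the network sees the ground-truth previous frame or its own prediction. A minimal sketch of that per-step choice follows; the inverse-sigmoid decay schedule is one of several options discussed in [3], and the constant k is an assumption.

    import math
    import random

    def choose_input(ground_truth_frame, predicted_frame, step, k=1000.0):
        # Probability of feeding ground truth decays toward 0 as training proceeds.
        eps = k / (k + math.exp(step / k))
        return ground_truth_frame if random.random() < eps else predicted_frame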
² See http://sites.google.com/site/robotprediction
[Figure 3: Qualitative and quantitative reconstruction performance of our models, compared with
[20, 17]. All models were trained for 8-step prediction, except [17], trained for 1-step prediction.
Rows show ground truth (GT) and CDNA predictions at t = 1, 5, 9, 13, 17; quantitative curves
compare against the feedforward multiscale (FF, ms) and FC LSTM baselines.]
5.1 Action-conditioned prediction for robotic pushing
Our primary evaluation is on video prediction using our robotic interaction dataset, conditioned on
the future actions taken by the robot. In this setting, we pass in two initial images, as well as the
initial robot arm state and actions, and then sequentially roll out the model, passing in the future
actions and the model's image and state prediction from the previous time step. We trained for 8
future time steps for all recurrent models, and test for up to 18 time steps. We held out 5% of the
training set for validation. To quantitatively evaluate the predictions, we measure average PSNR and
SSIM, as proposed in [17]. Unlike [17], we measure these metrics on the entire image. We evaluate
on two test sets described in Section 4, one with objects seen at training time, and one with previously
unseen objects.
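The evaluation loop just described can be sketched as follows; model.observe and model.step are hypothetical interfaces standing in for the recurrent network, and PSNR is computed per frame over the entire image.

    import numpy as np

    def psnr(pred, target, max_val=1.0):
        mse = np.mean((pred - target) ** 2)
        return 10.0 * np.log10(max_val ** 2 / mse)

    def rollout(model, context_frames, init_state, actions, horizon=18):
        # Condition on the initial frames, then feed back the model's own
        # image and state predictions together with the future actions.
        model.observe(context_frames[:-1])
        frame, state = context_frames[-1], init_state
        predictions = []
        for t in range(horizon):
            frame, state = model.step(frame, state, actions[t])
            predictions.append(frame)
        return predictions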
Figure 3 illustrates the performance of our models compared to prior methods. We report the
performance of the feedforward multiscale model of [17] using an l1+GDL loss, which was the best
performing model in our experiments; full results of the multi-scale models are in Appendix C. Our
methods significantly outperform prior video prediction methods on all metrics. The FC LSTM model
[20] reconstructs the background and lacks the representational power to reconstruct the objects in the
bin. The feedforward multiscale model performs well on 1-step prediction, but performance quickly
drops over time, as it is only trained for 1-step prediction. It is worth noting that our models are
significantly more parameter efficient: despite being recurrent, they contain 12.5 million parameters,
which is slightly less than the feedforward model with 12.6 million parameters and significantly
less than the FC LSTM model which has 78 million parameters. We found that none of the models
suffered from significant overfitting on this dataset. We also report the baseline performance of
simply copying the last observed ground truth frame.
In Figure 4, we compare to models with the same stacked convolutional LSTM architecture, but
that predict raw pixel values or the difference between previous and current frames. By explicitly
modeling pixel motion, our method outperforms these ablations. Note that the model without skip
connections is most representative of the model by Xingjian et al. [28]. We show a second ablation in
Figure 5, illustrating the benefit of training for longer horizons and from conditioning on the action of
the robot. Lastly, we show qualitative results in Figure 6 of changing the action of the arm to examine
the model's predictions about possible futures.
[Figure 4: Quantitative comparison to models which reconstruct rather than predict motion. Notice
that on the novel objects test set, there is a larger gap between models which predict motion and
those which reconstruct appearance.]
[Figure 5: Ablation of DNA without action conditioning, and with different prediction horizons
during training.]
For all of the models, the prediction quality degrades over time, as uncertainty increases further into
the future. We use a mean-squared error objective, which optimizes for the mean pixel values. The
model thus encodes uncertainty as blur. Modeling this uncertainty directly through, for example,
stochastic neural networks is an interesting direction for future work. Note that prior video prediction
methods have largely focused on single-frame prediction, and most have not demonstrated prediction
of multiple real-world RGB video frames in sequence. Action-conditioned multi-frame prediction is
a crucial ingredient in model-based planning, where the robot could mentally test the outcomes of
various actions before picking the best one for a given task.
5.2 Human motion prediction
In addition to the action-conditioned prediction, we also evaluate our model on predicting future
video without actions. We chose the Human3.6M dataset, which consists of human actors performing
various actions in a room. We trained all models on 5 of the human subjects, held out one subject for
validation, and held out a different subject for the evaluations presented here. Thus, the models have
never seen this particular human subject or any subject wearing the same clothes. We subsampled
the video down to 10 fps such that there was noticeable motion in the videos within reasonable time
frames. Since the model is no longer conditioned on actions, we fed in 10 video frames and trained
the network to produce the next 10 frames, corresponding to 1 second each. Our evaluation measures
performance up to 20 timesteps into the future.
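The data preparation just described amounts to temporal subsampling and fixed-size windowing; a small sketch (the 50 fps source frame rate is an assumption about the recordings):

    def make_clips(frames, src_fps=50, tgt_fps=10, context=10, horizon=10):
        # Subsample to tgt_fps, then emit (input, target) windows of frames.
        stride = src_fps // tgt_fps
        sub = frames[::stride]
        span = context + horizon
        for start in range(0, len(sub) - span + 1, span):
            window = sub[start:start + span]
            yield window[:context], window[context:]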
The results in Figure 7 show that our motion-predictive models quantitatively outperform prior
methods, and qualitatively produce plausible motions for at least 10 timesteps, starting to degrade
thereafter. We also show the masks predicted internally by the model for masking out the previous
frame, which we refer to as the background mask. These masks illustrate that the model learns to
segment the human subject in the image without any explicit supervision.
[Figure 6: CDNA predictions from the same starting image, but different future actions, with objects
not seen in the training set; frames shown at t = 1, 3, 5, 7, 9. By row, the images show the predicted
future with zero action (stationary), the original action, and an action 150% larger than the original.
Note how the prediction shows no motion with zero action, and with a larger action predicts more
motion, including object motion.]
[Figure 7: Quantitative and qualitative results on human motion video predictions with a held-out
human subject; rows show ground truth (GT), STP predictions, and the STP background mask at
t = 1, 4, 7, 10, 13. All recurrent models were trained for 10 future timesteps.]
6 Conclusion & Future Directions
In this work, we develop an action-conditioned video prediction model for interaction that incorporates
appearance information in previous frames with motion predicted by the model. To study unsupervised
learning for interaction, we also present a new video dataset with 59,000 real robot interactions and
1.5 million video frames. Our experiments show that, by learning to transform pixels in the initial
frame, our model can produce plausible video sequences more than 10 time steps into the future,
which corresponds to about one second. In comparisons to prior methods, our method achieves the
best results on a number of previously proposed metrics.
Predicting future object motion in the context of a physical interaction is a key building block of
an intelligent interactive system. The kind of action-conditioned prediction of future video frames
that we demonstrate can allow an interactive agent, such as a robot, to imagine different futures
based on the available actions. Such a mechanism can be used to plan for actions to accomplish a
particular goal, anticipate possible future problems (e.g. in the context of an autonomous vehicle),
and recognize interesting new phenomena in the context of exploration. While our model directly
predicts the motion of image pixels and naturally groups together pixels that belong to the same object
and move together, it does not explicitly extract an internal object-centric representation (e.g. as in
[7]). Learning such a representation would be a promising future direction, particularly for applying
efficient reinforcement learning algorithms that might benefit from concise state representations.
Acknowledgments
We would like to thank Vincent Vanhoucke, Mrinal Kalakrishnan, Jon Barron, Deirdre Quillen, and
our anonymous reviewers for helpful feedback and discussions. We would also like to thank Peter
Pastor for technical support with the robots.
References
[1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, et al. Tensorflow:
Large-scale machine learning on heterogeneous systems, 2015. Software from tensorflow.org, 2015.
[2] P. W. Battaglia, J. B. Hamrick, and J. B. Tenenbaum. Simulation as an engine of physical scene understanding. Proceedings of the National Academy of Sciences, 110(45), 2013.
[3] S. Bengio, O. Vinyals, N. Jaitly, and N. Shazeer. Scheduled sampling for sequence prediction with recurrent
neural networks. In Advances in Neural Information Processing Systems, pages 1171–1179, 2015.
[4] B. Boots, A. Byravan, and D. Fox. Learning predictive models of a depth camera & manipulator from raw
execution traces. In International Conference on Robotics and Automation (ICRA), 2014.
[5] M. A. Brubaker, L. Sigal, and D. J. Fleet. Estimating contact dynamics. In International Conference on
Computer Vision (ICCV), 2009.
[6] B. De Brabandere, X. Jia, T. Tuytelaars, and L. Van Gool. Dynamic filter networks. In Neural Information
Processing Systems (NIPS). 2016.
[7] S. Eslami, N. Heess, T. Weber, Y. Tassa, K. Kavukcuoglu, and G. E. Hinton. Attend, infer, repeat: Fast
scene understanding with generative models. Neural Information Processing Systems (NIPS), 2016.
[8] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun. Vision meets robotics: The KITTI dataset. International
Journal of Robotics Research (IJRR), 2013.
[9] D.-A. Huang and K. M. Kitani. Action-reaction: Forecasting the dynamics of human interaction. In
European Conference on Computer Vision (ECCV). Springer, 2014.
[10] C. Ionescu, D. Papava, V. Olaru, and C. Sminchisescu. Human3.6m: Large scale datasets and predictive
methods for 3d human sensing in natural environments. PAMI, 36(7), 2014.
[11] M. Jaderberg, K. Simonyan, A. Zisserman, et al. Spatial transformer networks. In Neural Information
Processing Systems (NIPS), 2015.
[12] A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei. Large-scale video classification
with convolutional neural networks. In Computer Vision and Pattern Recognition (CVPR), 2014.
[13] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. International Conference on Learning
Representations (ICLR), 2015.
[14] S. Lange, M. Riedmiller, and A. Voigtlander. Autonomous reinforcement learning on raw visual input data
in a real world application. In International Joint Conference on Neural Networks (IJCNN), 2012.
[15] A. Lerer, S. Gross, and R. Fergus. Learning physical intuition of block towers by example. International
Conference on Machine Learning (ICML), 2016.
[16] W. Lotter, G. Kreiman, and D. Cox. Deep predictive coding networks for video prediction and unsupervised
learning. arXiv preprint arXiv:1605.08104, 2016.
[17] M. Mathieu, C. Couprie, and Y. LeCun. Deep multi-scale video prediction beyond mean square error.
International Conference on Learning Representations (ICLR), 2016.
[18] R. Mottaghi, H. Bagherinezhad, M. Rastegari, and A. Farhadi. Newtonian image understanding: Unfolding
the dynamics of objects in static images. Computer Vision and Pattern Recognition (CVPR), 2015.
[19] R. Mottaghi, M. Rastegari, A. Gupta, and A. Farhadi. "What happens if..." learning to predict the effect of
forces in images. European Conference on Computer Vision (ECCV), 2016.
[20] J. Oh, X. Guo, H. Lee, R. L. Lewis, and S. Singh. Action-conditional video prediction using deep networks
in atari games. In Neural Information Processing Systems (NIPS), 2015.
[21] M. Ranzato, A. Szlam, J. Bruna, M. Mathieu, R. Collobert, and S. Chopra. Video (language) modeling: a
baseline for generative models of natural videos. arXiv preprint arXiv:1412.6604, 2014.
[22] N. Srivastava, E. Mansimov, and R. Salakhutdinov. Unsupervised learning of video representations using
lstms. International Conference on Machine Learning (ICML), 2015.
[23] C. Vondrick, H. Pirsiavash, and A. Torralba. Anticipating the future by watching unlabeled video. CoRR,
abs/1504.08023, 2015.
[24] C. Vondrick, H. Pirsiavash, and A. Torralba. Generating videos with scene dynamics. In Neural Information
Processing Systems (NIPS). 2016.
[25] J. Walker, C. Doersch, A. Gupta, and M. Hebert. An uncertain future: Forecasting from static images using
variational autoencoders. In European Conference on Computer Vision (ECCV), 2016.
[26] J. Walker, A. Gupta, and M. Hebert. Patch to the future: Unsupervised visual prediction. In Computer
Vision and Pattern Recognition (CVPR), 2014.
[27] M. Watter, J. Springenberg, J. Boedecker, and M. Riedmiller. Embed to control: A locally linear latent
dynamics model for control from raw images. In Neural Information Processing Systems (NIPS), 2015.
[28] S. Xingjian, Z. Chen, H. Wang, D. Yeung, W. Wong, and W. Woo. Convolutional LSTM network: A
machine learning approach for precipitation nowcasting. In Neural Information Processing Systems, 2015.
[29] T. Xue, J. Wu, K. L. Bouman, and W. T. Freeman. Visual dynamics: Probabilistic future frame synthesis
via cross convolutional networks. In Neural Information Processing Systems (NIPS). 2016.
[30] J. Yuen and A. Torralba. A data-driven approach for event prediction. In European Conference on
Computer Vision (ECCV), 2010.
Active Learning from Imperfect Labelers
Songbai Yan
University of California, San Diego
[email protected]
Kamalika Chaudhuri
University of California, San Diego
[email protected]
Tara Javidi
University of California, San Diego
[email protected]
Abstract
We study active learning where the labeler can not only return incorrect labels but
also abstain from labeling. We consider different noise and abstention conditions
of the labeler. We propose an algorithm which utilizes abstention responses,
and analyze its statistical consistency and query complexity under fairly natural
assumptions on the noise and abstention rate of the labeler. This algorithm is
adaptive in a sense that it can automatically request less queries with a more
informed or less noisy labeler. We couple our algorithm with lower bounds to show
that under some technical conditions, it achieves nearly optimal query complexity.
1 Introduction
In active learning, the learner is given an input space X , a label space L, and a hypothesis class H
such that one of the hypotheses in the class generates ground truth labels. Additionally, the learner
has at its disposal a labeler to which it can pose interactive queries about the labels of examples in
the input space. Note that the labeler may output a noisy version of the ground truth label (a flipped
label). The goal of the learner is to learn a hypothesis in H which is close to the hypothesis that
generates the ground truth labels.
There has been a significant amount of literature on active learning, both theoretical and practical.
Previous theoretical work on active learning has mostly focused on the above basic setting [2, 4,
7, 10, 25] and has developed algorithms under a number of different models of label noise. A
handful of exceptions include [3] which allows class conditional queries, [5] which allows requesting
counterexamples to current version spaces, and [23, 26] where the learner has access to a strong
labeler and one or more weak labelers.
In this paper, we consider a more general setting where, in addition to providing a possibly noisy
label, the labeler can sometimes abstain from labeling. This scenario arises naturally in difficult
labeling tasks and has been considered in computer vision by [11, 15]. Our goal in this paper is to
investigate this problem from a foundational perspective, and explore what kind of conditions are
needed, and how an abstaining labeler can affect properties such as consistency and query complexity
of active learning algorithms.
The setting of active learning with an abstaining noisy labeler was first considered by [24], who
looked at learning binary threshold classifiers based on queries to an labeler whose abstention rate is
higher closer to the decision boundary. They primarily looked at the case when the abstention rate at a
distance τ from the decision boundary is less than 1 − Θ(τ^α), and the rate of label flips at the same
distance is less than 1/2 − Θ(τ^β); under these conditions, they provided an active learning algorithm
that, given parameters α and β, outputs a classifier with error ε using Õ(ε^{−α−2β}) queries to the labeler.
However, there are several limitations to this work. The primary limitation is that parameters α and
β need to be known to the algorithm, which is not usually the case in practice. A second major
limitation is that even if the labeler has nice properties, such as, the abstention rates increase sharply
close to the boundary, their algorithm is unable to exploit these properties to reduce the number of
queries. A third and final limitation is that their analysis only applies to one-dimensional thresholds,
and not to more general decision boundaries.
In this work, we provide an algorithm which is able to exploit nice properties of the labeler. Our
algorithm is statistically consistent under very mild conditions: when the abstention rate is
non-decreasing as we get closer to the decision boundary. Under slightly stronger conditions as in [24],
our algorithm has the same query complexity. However, if the abstention rate of the labeler increases
strictly monotonically close to the decision boundary, then our algorithm adapts and does substantially
better. It simply exploits the increasing abstention rate close to the decision boundary, and does not
even have to rely on the noisy labels! Specifically, when applied to the case where the noise rate is at
most 1/2 − Θ(τ^β) and the abstention rate is 1 − Θ(τ^α) at distance τ from the decision boundary,
our algorithm can output a classifier with error ε based on only Õ(ε^{−α}) queries.
An important property of our algorithm is that the improvement of query complexity is achieved
in a completely adaptive manner; unlike previous work [24], our algorithm needs no information
whatsoever on the abstention rates or rates of label noise. Thus our result also strengthens existing
results on active learning from (non-abstaining) noisy labelers by providing an adaptive algorithm
that achieves the same performance as [6] without knowledge of noise parameters.
We extend our algorithm so that it applies to any smooth d-dimensional decision boundary in a
non-parametric setting, not just one-dimensional thresholds, and we complement it with lower bounds
on the number of queries that need to be made to any labeler. Our lower bounds generalize the lower
bounds in [24], and show that our upper bounds are nearly optimal. We also present an example
that shows that at least a relaxed version of the monotonicity property is necessary to achieve this
performance gain; if the abstention rate plateaus around the decision boundary, then our algorithm
needs to query and rely on the noisy labels (resulting in higher query complexity) in order to find a
hypothesis close to the one generating the ground truth labels.
1.1 Related work
There has been a considerable amount of work on active learning, most of which involves labelers
that are not allowed to abstain. Theoretical work on this topic largely falls under two categories:
the membership query model [6, 13, 18, 19], where the learner can request the label of any example in
the instance space, and the PAC model, where the learner is given a large set of unlabeled examples
from an underlying unlabeled data distribution, and can request labels of a subset of these examples.
Our work and also that of [24] builds on the membership query model.
There has also been a lot of work on active learning under different noise models. The problem is
relatively easy when the labeler always provides the ground truth labels; see [8, 9, 12] for work
in this setting in the PAC model, and [13] for the membership query model. Perhaps the simplest
setting of label noise is random classification noise, where each label is flipped with a probability that
is independent of the unlabeled instance. [14] shows how to address this kind of noise in the PAC
model by repeatedly querying an example until the learner is confident of its label; [18, 19] provide
more sophisticated algorithms with better query complexities in the membership query model. A
second setting is when the noise rate increases closer to the decision boundary; this setting has been
studied under the membership query model by [6] and in the PAC model by [10, 4, 25]. A final
setting is agnostic PAC learning, where a fixed but arbitrary fraction of labels may disagree with
the label assigned by the optimal hypothesis in the hypothesis class. Active learning is known to be
particularly difficult in this setting; however, algorithms and associated label complexity bounds have
been provided by [1, 2, 4, 10, 12, 25] among others.
Our work expands on the membership query model, and our abstention and noise models are related
to a variant of the Tsybakov noise condition. A setting similar to ours was considered by [6, 24]. [6]
considers a non-abstaining labeler, and provides a near-optimal binary search style active learning
algorithm; however, their algorithm is non-adaptive. [24] gives nearly matching lower and upper
query complexity bounds for active learning with abstention feedback, but they only give a non-adaptive
algorithm for learning one-dimensional thresholds, and only study the situation where the
abstention rate is upper-bounded by a polynomial function. Besides [24], [11, 15] study active
learning with abstention feedback in computer vision applications. However, these works are based
on heuristics and do not provide any theoretical guarantees.
2 Settings
Notation. 1[A] is the indicator function: 1[A] = 1 if A is true, and 0 otherwise. For x =
(x₁, …, x_d) ∈ R^d (d > 1), denote (x₁, …, x_{d−1}) by x̃. Define ln x = log_e x, log x = log_{4/3} x,
and [ln ln]₊(x) = ln ln max{x, e^e}. We use Õ and Ω̃ to hide logarithmic factors in 1/ε, 1/δ, and d.
Definition. Suppose λ ≥ 1. A function g : [0, 1]^{d−1} → R is (K, λ)-Hölder smooth if it is
continuously differentiable up to ⌊λ⌋-th order, and for any x, y ∈ [0, 1]^{d−1},
|g(y) − Σ_{m=0}^{⌊λ⌋} (∂^m g(x)/m!) (y − x)^m| ≤ K ‖y − x‖^λ. We denote this class of functions by Σ(K, λ).
We consider active learning for binary classification. We are given an instance space X = [0, 1]^d and
a label space L = {0, 1}. Each instance x ∈ X is assigned a label l ∈ {0, 1} by an underlying
function h* : X → {0, 1}, unknown to the learning algorithm, in a hypothesis space H of interest. The
learning algorithm has access to any x ∈ X, but no access to their labels. Instead, it can only obtain
label information through interactions with a labeler, whose relation to h* is to be specified later. The
objective of the algorithm is to sequentially select the instances to query for label information and
output a classifier ĥ that is close to h* while making as few queries as possible.
We consider a non-parametric setting as in [6, 17] where the hypothesis space is the smooth boundary
fragment class H = {h_g(x) = 1[x_d > g(x̃)] | g : [0, 1]^{d−1} → [0, 1] is (K, λ)-Hölder smooth}. In
other words, the decision boundaries of classifiers in this class are epigraphs of smooth functions (see
Figure 1 for an example). We assume h*(x) = 1[x_d > g*(x̃)] ∈ H. When d = 1, H reduces to the
space of threshold functions {h_θ(x) = 1[x > θ] : θ ∈ [0, 1]}.
The performance of a classifier h_g(x) = 1[x_d > g(x̃)] is evaluated by the L₁ distance between the
decision boundaries: ‖g − g*‖ = ∫_{[0,1]^{d−1}} |g(x̃) − g*(x̃)| dx̃.
The learning algorithm can only obtain label information by querying a labeler, who is allowed
to abstain from labeling or return an incorrect label (flipping between 0 and 1). For each query
x ∈ [0, 1]^d, the labeler L will return y ∈ Y = {0, 1, ⊥} (⊥ means that the labeler abstains from
providing a 0/1 label) according to some distribution P_L(Y = y | X = x). When it is clear from
the context, we will drop the subscript from P_L(Y | X). Note that while the labeler can declare its
indecision by outputting ⊥, we do not allow classifiers in our hypothesis space to output ⊥.
In our active learning setting, our goal is to output a boundary g that is close to g* while making as few
interactive queries to the labeler as possible. In particular, we want to find an algorithm with low query
complexity Λ(ε, δ, A, L, g*), defined as the minimum number of queries that Algorithm
A, acting on samples with ground truth g*, should make to a labeler L to ensure that the output
classifier h_g(x) = 1[x_d > g(x̃)] has the property ‖g − g*‖ = ∫_{[0,1]^{d−1}} |g(x̃) − g*(x̃)| dx̃ ≤ ε with
probability at least 1 − δ over the responses of L.
2.1 Conditions
We now introduce three conditions on the response of the labeler with increasing strictness. Later we
will provide an algorithm whose query complexity improves with increasing strictness of conditions.
Condition 1. The response distribution of the labeler P(Y | X) satisfies:
• (abstention) For any x̃ ∈ [0, 1]^{d−1} and x_d, x′_d ∈ [0, 1], if |x_d − g*(x̃)| ≤ |x′_d − g*(x̃)| then
P(⊥ | (x̃, x_d)) ≥ P(⊥ | (x̃, x′_d));
• (noise) For any x ∈ [0, 1]^d, P(Y ≠ 1[x_d > g*(x̃)] | x, Y ≠ ⊥) ≤ 1/2.
Condition 1 means that the closer x is to the decision boundary (x̃, g*(x̃)), the more likely the labeler
is to abstain from labeling. This complies with the intuition that instances closer to the decision
boundary are harder to classify. We also assume the 0/1 labels can be flipped with probability as large
as 1/2. In other words, we allow unbounded noise.
[Figure 1: A classifier with boundary g(x̃) = (x₁ − 0.4)² + 0.1 for d = 2. Label 1 is assigned to the
region above the boundary, 0 to the region below.]
[Figure 2: Response distributions P(Y = ⊥ | X = x), P(Y = 1 | X = x), P(Y = 0 | X = x) that
satisfy Conditions 1 and 2, but where the abstention feedback is useless since P(⊥ | x) is flat
between x = 0.2 and 0.4.]
[Figure 3: Response distributions that satisfy Conditions 1, 2, and 3.]
Condition 2. Let C, β be non-negative constants, and let f : [0, 1] → [0, 1] be a nondecreasing function.
The response distribution P(Y | X) satisfies:
• (abstention) P(⊥ | x) ≤ 1 − f(|x_d − g*(x̃)|);
• (noise) P(Y ≠ 1[x_d > g*(x̃)] | x, Y ≠ ⊥) ≤ (1/2)(1 − C |x_d − g*(x̃)|^β).
Condition 2 requires the abstention and noise probabilities to be upper-bounded, and these upper
bounds decrease as x moves further away from the decision boundary. The abstention rate can be 1
at the decision boundary, so the labeler may always abstain at the decision boundary. The condition
on the noise satisfies the popular Tsybakov noise condition [22].
Condition 3. Let f : [0, 1] → [0, 1] be a nondecreasing function such that there exists 0 < c < 1 with
f(b)/f(a) ≤ 1 − c for all 0 < a ≤ 1 and 0 ≤ b ≤ (2/3)a. The response distribution satisfies:
P(⊥ | x) = 1 − f(|x_d − g*(x̃)|).
An example where Condition 3 holds is P(⊥ | x) = 1 − |x − 0.3|^α (α > 0).
Condition 3 requires the abstention rate to increase monotonically close to the decision boundary
as in Condition 1. In addition, it requires the abstention probability P(⊥ | (x̃, x_d)) not to be too
flat with respect to x_d. For example, when d = 1, P(⊥ | x) = 0.68 for 0.2 ≤ x ≤ 0.4 (shown
in Figure 2) does not satisfy Condition 3, and abstention responses are not informative since this
abstention rate alone yields no information on the location of the decision boundary. In contrast,
P(⊥ | x) = 1 − √|x − 0.3| (shown in Figure 3) satisfies Condition 3, and the learner could infer it
is getting close to the decision boundary when it starts receiving more abstention responses.
Note that here c, f, C, β are unknown and arbitrary parameters that characterize the complexity of the
learning task. We want to design an algorithm that does not require knowledge of these parameters
but still achieves nearly optimal query complexity.
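To make the conditions concrete, here is a small simulated labeler for d = 1 that satisfies Conditions 1-3 with f(t) = t^α and a Tsybakov-type noise rate; the specific parameter values are illustrative only.

    import random

    def simulated_labeler(x, theta=0.3, alpha=0.5, beta=1.0, C=1.0):
        # Returns '?' (abstention), 0, or 1 for a query at x in [0, 1].
        # Abstention: P(abstain | x) = 1 - |x - theta|**alpha    (Conditions 1 and 3)
        # Noise: P(flip | x, answered) = 0.5*(1 - C*|x - theta|**beta)  (Condition 2)
        dist = abs(x - theta)
        if random.random() < 1.0 - dist ** alpha:
            return '?'
        true_label = int(x > theta)
        flip_prob = 0.5 * (1.0 - min(1.0, C * dist ** beta))
        return 1 - true_label if random.random() < flip_prob else true_label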
3 Learning one-dimensional thresholds
In this section, we start with the one-dimensional case (d = 1) to demonstrate the main idea. We will
generalize these results to a multidimensional instance space in the next section.
When d = 1, the decision boundary g* becomes a point in [0, 1], and the corresponding classifier
is a threshold function over [0, 1]. In other words, the hypothesis space becomes H = {f_θ(x) =
1[x > θ] : θ ∈ [0, 1]}. We denote the ground truth decision boundary by θ* ∈ [0, 1]. We want to
find a θ̂ ∈ [0, 1] such that |θ̂ − θ*| is small while making as few queries as possible.
3.1 Algorithm
The proposed algorithm is a binary search style algorithm shown as Algorithm 1. (For the sake of
simplicity, we assume log(1/(2ε)) is an integer.) Algorithm 1 takes a desired precision ε and a
confidence level δ as its input, and returns an estimate θ̂ of the decision boundary θ*.
Algorithm 1 The active learning algorithm for learning thresholds
1: Input: ε, δ
2: [L₀, R₀] ← [0, 1]
3: for k = 0, 1, 2, …, log(1/(2ε)) − 1 do
4:   Define three quartiles: U_k ← (3L_k + R_k)/4, M_k ← (L_k + R_k)/2, V_k ← (L_k + 3R_k)/4
5:   A^(u), A^(m), A^(v), B^(u), B^(v) ← empty arrays
6:   for n = 1, 2, … do
7:     Query at U_k, M_k, V_k, and receive labels X_n^(u), X_n^(m), X_n^(v)
8:     for w ∈ {u, m, v} do
9:       ▷ Record whether X^(w) = ⊥ in A^(w), and the 0/1 label (as −1/1) in B^(w) if X^(w) ≠ ⊥
10:      if X^(w) ≠ ⊥ then
11:        A^(w).append(1), B^(w).append(2 · 1[X^(w) = 1] − 1)
12:      else
13:        A^(w).append(0)
14:      end if
15:    end for
16:    ▷ Check if the differences of abstention responses are statistically significant
17:    if CheckSignificant-Var({A_i^(u) − A_i^(m)}_{i=1}^n, δ/(4 log(1/(2ε)))) then
18:      [L_{k+1}, R_{k+1}] ← [U_k, R_k]; break
19:    else if CheckSignificant-Var({A_i^(v) − A_i^(m)}_{i=1}^n, δ/(4 log(1/(2ε)))) then
20:      [L_{k+1}, R_{k+1}] ← [L_k, V_k]; break
21:    end if
22:    ▷ Check if the differences between 0 and 1 labels are statistically significant
23:    if CheckSignificant({−B_i^(u)}_{i=1}^{B^(u).length}, δ/(4 log(1/(2ε)))) then
24:      [L_{k+1}, R_{k+1}] ← [U_k, R_k]; break
25:    else if CheckSignificant({B_i^(v)}_{i=1}^{B^(v).length}, δ/(4 log(1/(2ε)))) then
26:      [L_{k+1}, R_{k+1}] ← [L_k, V_k]; break
27:    end if
28:  end for
29: end for
30: Output: θ̂ = (L_{log(1/(2ε))} + R_{log(1/(2ε))})/2
The algorithm maintains an interval [L_k, R_k] in which θ* is believed to lie, and shrinks this interval
iteratively. To find the subinterval that contains θ*, Algorithm 1 relies on two auxiliary functions
(given in Procedure 2) to conduct adaptive sequential hypothesis tests regarding subintervals of the
interval [L_k, R_k].
Suppose θ* ∈ [L_k, R_k]. Algorithm 1 tries to shrink this interval to 3/4 of its length in each iteration by
repetitively querying at the quartiles U_k = (3L_k + R_k)/4, M_k = (L_k + R_k)/2, V_k = (L_k + 3R_k)/4.
To determine which specific subinterval to choose, the algorithm uses 0/1 labels and abstention
responses simultaneously. Since the ground truth labels are determined by 1[x > θ*], one can infer
that if the number of queries that return label 0 at U_k (V_k) is statistically significantly more (less)
than label 1, then θ* should be on the right (left) side of U_k (V_k). Similarly, from Condition 1, if
the number of non-abstention responses at U_k (V_k) is statistically significantly more than the number
of non-abstention responses at M_k, then θ* should be closer to M_k than to U_k (V_k).
Algorithm 1 relies on the ability to shrink the search interval via statistically comparing the numbers
of obtained labels at locations Uk , Mk , Vk . As a result, a main building block of Algorithm 1 is to
test whether i.i.d. bounded random variables Yi are greater in expectation than i.i.d. bounded random
variables Zi with statistical significance. In Procedure 2, we have two test functions CheckSignificant
and CheckSignificant-Var that take i.i.d. random variables {X_i = Y_i − Z_i} (|X_i| ≤ 1) and a confidence
level δ as their input, and output whether it is statistically significant to conclude E X_i > 0.
Procedure 2 Adaptive sequential testing
1: ▷ D₀, D₁ are absolute constants defined in Proposition 1 and Proposition 2
2: ▷ {X_i} are i.i.d. random variables bounded by 1. δ is the confidence level. Detect if E X > 0
3: function CheckSignificant({X_i}_{i=1}^n, δ)
4:   p(n, δ) ← D₀ (1 + ln(1/δ) + √(4n ([ln ln]₊(4n) + ln(1/δ))))
5:   Return Σ_{i=1}^n X_i ≥ p(n, δ)
6: end function
7: function CheckSignificant-Var({X_i}_{i=1}^n, δ)
8:   Calculate the empirical variance Var = (1/(n−1)) (Σ_{i=1}^n X_i² − (1/n)(Σ_{i=1}^n X_i)²)
9:   q(n, Var, δ) ← D₁ (1 + ln(1/δ) + √((Var + ln(1/δ) + 1) ([ln ln]₊(Var + ln(1/δ) + 1) + ln(1/δ))))
10:  Return n ≥ ln(1/δ) AND Σ_{i=1}^n X_i ≥ q(n, Var, δ)
11: end function
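A direct Python transcription of Procedure 2 might look like this. The constants D0 and D1 are only asserted to exist in Propositions 1 and 2 and are not given numerically, so the values below are placeholders.

    import math

    D0, D1 = 1.0, 1.0  # placeholders; the propositions only assert existence

    def lnln_plus(x):
        # [ln ln]_+(x) = ln ln max{x, e^e}
        return math.log(math.log(max(x, math.e ** math.e)))

    def check_significant(xs, delta):
        # Detect E[X] > 0 from i.i.d. samples bounded by 1.
        n, s = len(xs), sum(xs)
        p = D0 * (1 + math.log(1 / delta)
                  + math.sqrt(4 * n * (lnln_plus(4 * n) + math.log(1 / delta))))
        return s >= p

    def check_significant_var(xs, delta):
        n, s = len(xs), sum(xs)
        if n < 2 or n < math.log(1 / delta):
            return False
        var = (sum(x * x for x in xs) - s * s / n) / (n - 1)
        t = var + math.log(1 / delta) + 1
        q = D1 * (1 + math.log(1 / delta)
                  + math.sqrt(t * (lnln_plus(t) + math.log(1 / delta))))
        return s >= q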
CheckSignificant is based on the following uniform concentration result regarding the empirical
mean:
Proposition 1. Suppose X₁, X₂, … is a sequence of i.i.d. random variables with X₁ ∈ [−2, 2] and
E X₁ = 0. Take any 0 < δ < 1. Then there is an absolute constant D₀ such that with probability at
least 1 − δ, for all n > 0 simultaneously,
    Σ_{i=1}^n X_i ≤ D₀ (1 + ln(1/δ) + √(4n ([ln ln]₊(4n) + ln(1/δ)))).
In Algorithm 1, we use CheckSignificant to detect whether the expected number of queries that
return label 0 at location Uk (Vk ) is more/less than the expected number of label 1 with a statistical
significance.
CheckSignificant-Var is based on the following uniform concentration result, which further utilizes
the empirical variance V_n = (1/(n−1)) (Σ_{i=1}^n X_i² − (1/n)(Σ_{i=1}^n X_i)²):
Proposition 2. There is an absolute constant D₁ such that with probability at least 1 − δ, for all
n ≥ ln(1/δ) simultaneously,
    Σ_{i=1}^n X_i ≤ D₁ (1 + ln(1/δ) + √((1 + ln(1/δ) + V_n) ([ln ln]₊(1 + ln(1/δ) + V_n) + ln(1/δ)))).
The use of variance results in a tighter bound when Var(Xi ) is small.
In Algorithm 1, we use CheckSignificant-Var to detect the statistical significance of the relative order
of the number of queries that return non-abstention responses at Uk (Vk ) compared to the number of
non-abstention responses at Mk . This results in a better query complexity than using CheckSignificant
under Condition 3, since the variance of the number of abstention responses approaches 0 when the
interval [L_k, R_k] zooms in on θ*.¹
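Putting the pieces together, one iteration of the interval-shrinking loop of Algorithm 1 can be sketched as follows, using the test functions above; labeler is any function mapping a point to '?', 0, or 1 (for instance the simulated labeler from Section 2). The max_queries cutoff is ours, not part of the algorithm.

    def shrink_interval(L, R, labeler, delta_k, max_queries=100_000):
        U, M, V = (3 * L + R) / 4, (L + R) / 2, (L + 3 * R) / 4
        A = {w: [] for w in 'umv'}  # 1 if the labeler answered, 0 if it abstained
        B = {w: [] for w in 'umv'}  # 0/1 answers recoded as -1/+1
        for _ in range(max_queries):
            for w, point in zip('umv', (U, M, V)):
                y = labeler(point)
                A[w].append(0 if y == '?' else 1)
                if y != '?':
                    B[w].append(2 * y - 1)
            du = [a - b for a, b in zip(A['u'], A['m'])]
            dv = [a - b for a, b in zip(A['v'], A['m'])]
            if check_significant_var(du, delta_k):
                return U, R  # U answers more often than M: theta* is right of U
            if check_significant_var(dv, delta_k):
                return L, V  # V answers more often than M: theta* is left of V
            if check_significant([-b for b in B['u']], delta_k):
                return U, R  # labels at U are significantly 0: theta* > U
            if check_significant(B['v'], delta_k):
                return L, V  # labels at V are significantly 1: theta* < V
        return L, R  # fallback cutoff, not in the original algorithm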
3.2 Analysis
For Algorithm 1 to be statistically consistent, we only need Condition 1.
Theorem 1. Let θ* be the ground truth. If the labeler L satisfies Condition 1 and Algorithm 1 stops
and outputs θ̂, then |θ̂ − θ*| ≤ ε with probability at least 1 − δ/2.
¹ We do not apply CheckSignificant-Var to 0/1 labels because, unlike the difference between the numbers of
abstention responses at U_k (V_k) and M_k, the variance of the difference between the numbers of 0 and 1 labels
stays above a positive constant.
Under additional Conditions 2 and 3, we can derive upper bounds on the query complexity of our
algorithm. (Recall f and β are defined in Conditions 2 and 3.)
Theorem 2. Let θ* be the ground truth, and θ̂ be the output of Algorithm 1. Under Conditions 1 and
2, with probability at least 1 − δ, Algorithm 1 makes at most Õ(f(ε)^{−1} ε^{−2β}) queries.
Theorem 3. Let θ* be the ground truth, and θ̂ be the output of Algorithm 1. Under Conditions 1 and
3, with probability at least 1 − δ, Algorithm 1 makes at most Õ(f(ε)^{−1}) queries.
The query complexity given by Theorem 3 is independent of β, which determines the flipping rate, and
is consequently smaller than the bound in Theorem 2. This improvement is due to the use of abstention
responses, which become much more informative under Condition 3.
3.3 Lower Bounds
In this subsection, we give lower bounds on the query complexity in the one-dimensional case and
establish near optimality of Algorithm 1. We give corresponding lower bounds for the
high-dimensional case in the next section.
The lower bound in [24] can be easily generalized to Condition 2:
Theorem 4. ([24]) There is a universal constant δ₀ ∈ (0, 1) and a labeler L satisfying Conditions 1
and 2, such that for any active learning algorithm A, there is a θ* ∈ [0, 1], such that for small enough
ε, Λ(ε, δ₀, A, L, θ*) ≥ Ω̃(f(ε)^{−1} ε^{−2β}).
Our query complexity (Theorem 3) for the algorithm is also almost tight under Conditions 1 and 3
with a polynomial abstention rate.
Theorem 5. There is a universal constant δ₀ ∈ (0, 1) and a labeler L satisfying Conditions 1, 2,
and 3 with f(x) = C′x^α (C′ > 0 and 0 < α ≤ 2 are constants), such that for any active learning
algorithm A, there is a θ* ∈ [0, 1], such that for small enough ε, Λ(ε, δ₀, A, L, θ*) ≥ Ω(ε^{−α}).
3.4 Remarks
Our results confirm the intuition that learning with abstention is easier than learning with noisy
labels. This is true because a noisy label might mislead the learning algorithm, but an abstention
response never does. Our analysis shows, in particular, that if the labeler never abstains and outputs
completely noisy labels with probability bounded by 1 − |x − θ*|^α (i.e., P(Y ≠ 1[x > θ*] | x) ≤
(1/2)(1 − |x − θ*|^α)), then the near optimal query complexity of Õ(ε^{−2α}) is significantly larger than the
near optimal Õ(ε^{−α}) query complexity associated with a labeler who only abstains with probability
P(Y = ⊥ | x) ≤ 1 − |x − θ*|^α and never flips a label. More precisely, while in both cases the labeler
outputs the same amount of corrupted labels, the query complexity of the abstention-only case is
significantly smaller than that of the noise-only case.
Note that the query complexity of Algorithm 1 consists of two kinds of queries: queries which return
0/1 labels and are used by function CheckSignificant, and queries which return abstention and are
used by function CheckSignificant-Var. Algorithm 1 will stop querying when the responses of one of
the two kinds of queries are statistically significant. Under Condition 2, our proof actually shows
that the optimal number of queries is dominated by the number of queries used by the CheckSignificant
function. In other words, a simplified variant of Algorithm 1 which excludes use of abstention
feedback is near optimal. Similarly, under Condition 3, the optimal query complexity is dominated
by the number of queries used by the CheckSignificant-Var function. Hence the variant of Algorithm 1
which disregards 0/1 labels would be near optimal.
4 The multidimensional case
We follow [6] to generalize the results from one-dimensional thresholds to the d-dimensional (d > 1)
smooth boundary fragment class Σ(K, λ).
Algorithm 3 The active learning algorithm for the smooth boundary fragment class
1: Input: ε, δ, λ
2: M ← ε^{−1/λ}; L ← {0, 1/M, 2/M, …, (M−1)/M}^{d−1}
3: For each l ∈ L, apply Algorithm 1 with parameters (ε, δ/M^{d−1}) to learn a threshold g_l that
   approximates g*(l)
4: Partition the instance space into cells {I_q} indexed by q ∈ {0, 1, …, M/λ − 1}^{d−1}, where
   I_q = [q₁λ/M, (q₁+1)λ/M] × ⋯ × [q_{d−1}λ/M, (q_{d−1}+1)λ/M]
5: For each cell I_q, perform a polynomial interpolation: g_q(x̃) = Σ_{l ∈ I_q ∩ L} g_l Q_{q,l}(x̃), where
   Q_{q,l}(x̃) = Π_{i=1}^{d−1} Π_{j=0, j ≠ M l_i − λ q_i}^{λ} (x̃_i − (λ q_i + j)/M) / (l_i − (λ q_i + j)/M)
6: Output: g(x̃) = Σ_{q ∈ {0,1,…,M/λ−1}^{d−1}} g_q(x̃) 1[x̃ ∈ I_q]
4.1 Lower bounds
Theorem 6. There are universal constants δ₀ ∈ (0, 1), c₀ > 0, and a labeler L satisfying Conditions 1
and 2, such that for any active learning algorithm A, there is a g* ∈ Σ(K, λ), such that for small
enough ε, Λ(ε, δ₀, A, L, g*) ≥ Ω̃(f(c₀ε)^{−1} ε^{−2β−(d−1)/λ}).
Theorem 7. There is a universal constant δ₀ ∈ (0, 1) and a labeler L satisfying Conditions 1, 2,
and Condition 3 with f(x) = C′x^α (C′ > 0 and 0 < α ≤ 2 are constants), such that for any active
learning algorithm A, there is a g* ∈ Σ(K, λ), such that for small enough ε, Λ(ε, δ₀, A, L, g*) ≥
Ω̃(ε^{−α−(d−1)/λ}).
4.2 Algorithm and Analysis
Recall that the decision boundary of the smooth boundary fragment class can be seen as the epigraph of a
smooth function [0, 1]^{d−1} → [0, 1]. For d > 1, we can reduce the problem to the one-dimensional
problem by discretizing the first d − 1 dimensions of the instance space and then performing a polynomial
interpolation. The algorithm is shown as Algorithm 3. For the sake of simplicity, we assume M, M/λ
in Algorithm 3 are integers.
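The per-cell interpolation of step 5 reduces, along each of the d − 1 dimensions, to a one-dimensional Lagrange basis; a sketch of that building block (the cell and grid bookkeeping of Algorithm 3 is omitted):

    def lagrange_basis(x, nodes, i):
        # Value at x of the Lagrange basis polynomial that is 1 at nodes[i]
        # and 0 at every other node.
        out = 1.0
        for j, t in enumerate(nodes):
            if j != i:
                out *= (x - t) / (nodes[i] - t)
        return out

    def interpolate_thresholds(x, nodes, thresholds):
        # Interpolate the learned thresholds g_l at the cell's grid nodes.
        return sum(g * lagrange_basis(x, nodes, i)
                   for i, g in enumerate(thresholds))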
We have similar consistency guarantees and upper bounds as in the one-dimensional case.
Theorem 8. Let g* be the ground truth. If the labeler L satisfies Condition 1 and Algorithm 3 stops
and outputs g, then ‖g* − g‖ ≤ ε with probability at least 1 − δ/2.
Theorem 9. Let g* be the ground truth, and g be the output of Algorithm 3. Under Conditions 1 and
2, with probability at least 1 − δ, Algorithm 3 makes at most Õ(f(ε/2)^{−1} ε^{−2β−(d−1)/λ}) queries.
Theorem 10. Let g* be the ground truth, and g be the output of Algorithm 3. Under Conditions 1
and 3, with probability at least 1 − δ, Algorithm 3 makes at most Õ(f(ε/2)^{−1} ε^{−(d−1)/λ}) queries.
Acknowledgments. We thank NSF under IIS-1162581, CCF-1513883, and CNS-1329819 for
research support.
References
[1] M.-F. Balcan and P. M. Long. Active and passive learning of linear separators under log-concave distributions. In COLT, 2013.
[2] Maria-Florina Balcan, Alina Beygelzimer, and John Langford. Agnostic active learning. In Proceedings of
the 23rd International Conference on Machine Learning, pages 65–72. ACM, 2006.
[3] Maria-Florina Balcan and Steve Hanneke. Robust interactive learning. In Proceedings of The 25th
Conference on Learning Theory, 2012.
[4] A. Beygelzimer, D. Hsu, J. Langford, and T. Zhang. Agnostic active learning without constraints. In NIPS,
2010.
[5] Alina Beygelzimer, Daniel Hsu, John Langford, and Chicheng Zhang. Search improves label for active
learning. arXiv preprint arXiv:1602.07265, 2016.
[6] Rui M. Castro and Robert D. Nowak. Minimax bounds for active learning. IEEE Transactions on
Information Theory, 54(5):2339–2353, 2008.
[7] Yuxin Chen, S. Hamed Hassani, Amin Karbasi, and Andreas Krause. Sequential information maximization:
When is greedy near-optimal? In Proceedings of The 28th Conference on Learning Theory, pages 338–363,
2015.
[8] D. A. Cohn, L. E. Atlas, and R. E. Ladner. Improving generalization with active learning. Machine
Learning, 15(2), 1994.
[9] S. Dasgupta. Coarse sample complexity bounds for active learning. In NIPS, 2005.
[10] S. Dasgupta, D. Hsu, and C. Monteleoni. A general agnostic active learning algorithm. In NIPS, 2007.
[11] Meng Fang and Xingquan Zhu. I don't know the label: Active learning with blind knowledge. In Pattern
Recognition (ICPR), 2012 21st International Conference on, pages 2238–2241. IEEE, 2012.
[12] Steve Hanneke. Teaching dimension and the complexity of active learning. In Learning Theory, pages
66–81. Springer, 2007.
[13] Tibor Hegedűs. Generalized teaching dimensions and the query complexity of learning. In Proceedings of
the Eighth Annual Conference on Computational Learning Theory, pages 108–117. ACM, 1995.
[14] M. Kääriäinen. Active learning in the non-realizable case. In ALT, 2006.
[15] Christoph Käding, Alexander Freytag, Erik Rodner, Paul Bodesheim, and Joachim Denzler. Active learning
and discovery of object categories in the presence of unnameable instances. In Computer Vision and
Pattern Recognition (CVPR), 2015 IEEE Conference on, pages 4343–4352. IEEE, 2015.
[16] Yuan-Chuan Li and Cheh-Chih Yeh. Some equivalent forms of Bernoulli's inequality: A survey. Applied
Mathematics, 4(07):1070, 2013.
[17] Stanislav Minsker. Plug-in approach to active learning. Journal of Machine Learning Research,
13(Jan):67–90, 2012.
[18] Mohammad Naghshvar, Tara Javidi, and Kamalika Chaudhuri. Bayesian active learning with non-persistent
noise. IEEE Transactions on Information Theory, 61(7):4080–4098, 2015.
[19] R. D. Nowak. The geometry of generalized binary search. IEEE Transactions on Information Theory,
57(12):7893–7906, 2011.
[20] Maxim Raginsky and Alexander Rakhlin. Lower bounds for passive and active learning. In Advances in
Neural Information Processing Systems, pages 1026–1034, 2011.
[21] Aaditya Ramdas and Akshay Balsubramani. Sequential nonparametric testing with the law of the iterated
logarithm. In Proceedings of the Conference on Uncertainty in Artificial Intelligence, 2016.
[22] A. B. Tsybakov. Optimal aggregation of classifiers in statistical learning. Annals of Statistics, 32:135–166,
2004.
[23] Ruth Urner, Shai Ben-david, and Ohad Shamir. Learning from weak teachers. In International Conference
on Artificial Intelligence and Statistics, pages 1252?1260, 2012.
[24] Songbai Yan, Kamalika Chaudhuri, and Tara Javidi. Active learning from noisy and abstention feedback.
In Communication, Control, and Computing (Allerton), 2015 53rd Annual Allerton Conference on. IEEE,
2015.
[25] Chicheng Zhang and Kamalika Chaudhuri. Beyond disagreement-based agnostic active learning. In
Advances in Neural Information Processing Systems, pages 442–450, 2014.
[26] Chicheng Zhang and Kamalika Chaudhuri. Active learning from weak and strong labelers. In Advances in
Neural Information Processing Systems, pages 703–711, 2015.
5,706 | 6,163 | Collaborative Recurrent Autoencoder:
Recommend while Learning to Fill in the Blanks
Hao Wang, Xingjian Shi, Dit-Yan Yeung
Hong Kong University of Science and Technology
{hwangaz,xshiab,dyyeung}@cse.ust.hk
Abstract
Hybrid methods that utilize both content and rating information are commonly
used in many recommender systems. However, most of them use either handcrafted
features or the bag-of-words representation as a surrogate for the content information but they are neither effective nor natural enough. To address this problem,
we develop a collaborative recurrent autoencoder (CRAE) which is a denoising
recurrent autoencoder (DRAE) that models the generation of content sequences in
the collaborative filtering (CF) setting. The model generalizes recent advances in
recurrent deep learning from i.i.d. input to non-i.i.d. (CF-based) input and provides
a new denoising scheme along with a novel learnable pooling scheme for the recurrent autoencoder. To do this, we first develop a hierarchical Bayesian model for the
DRAE and then generalize it to the CF setting. The synergy between denoising
and CF enables CRAE to make accurate recommendations while learning to fill
in the blanks in sequences. Experiments on real-world datasets from different
domains (CiteULike and Netflix) show that, by jointly modeling the order-aware
generation of sequences for the content information and performing CF for the
ratings, CRAE is able to significantly outperform the state of the art on both the
recommendation task based on ratings and the sequence generation task based on
content information.
1 Introduction
With the high prevalence and abundance of Internet services, recommender systems are becoming
increasingly important to attract users because they can help users make effective use of the information available. Companies like Netflix have been using recommender systems extensively to target
users and promote products. Existing methods for recommender systems can be roughly categorized
into three classes [13]: content-based methods that use the user profiles or product descriptions only,
collaborative filtering (CF) based methods that use the ratings only, and hybrid methods that make
use of both. Hybrid methods using both types of information can get the best of both worlds and, as a
result, usually outperform content-based and CF-based methods.
Among the hybrid methods, collaborative topic regression (CTR) [20] was proposed to integrate a
topic model and probabilistic matrix factorization (PMF) [15]. CTR is an appealing method in that it
produces both promising and interpretable results. However, CTR uses a bag-of-words representation
and ignores the order of words and the local context around each word, which can provide valuable
information when learning article representation and word embeddings. Deep learning models like
convolutional neural networks (CNN) which use layers of sliding windows (kernels) have the potential
of capturing the order and local context of words. However, the kernel size in a CNN is fixed during
training. To achieve good enough performance, sometimes an ensemble of multiple CNNs with
different kernel sizes has to be used. A more natural and adaptive way of modeling text sequences
would be to use gated recurrent neural network (RNN) models [8, 3, 18]. A gated RNN takes in one
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
word (or multiple words) at a time and lets the learned gates decide whether to incorporate or to
forget the word. Intuitively, if we can generalize gated RNNs to the CF setting (non-i.i.d.) to jointly
model the generation of sequences and the relationship between items and users (rating matrices), the
recommendation performance could be significantly boosted.
Nevertheless, very few attempts have been made to develop feedforward deep learning models for CF,
let alone recurrent ones. This is due partially to the fact that deep learning models, like many machine
learning models, assume i.i.d. inputs. [16, 6, 7] use restricted Boltzmann machines and RNN instead
of the conventional matrix factorization (MF) formulation to perform CF. Although these methods
involve both deep learning and CF, they actually belong to CF-based methods because they do not
incorporate the content information like CTR, which is crucial for accurate recommendation. [14]
uses low-rank MF in the last weight layer of a deep network to reduce the number of parameters, but
it is for classification instead of recommendation tasks. There have also been nice explorations on
music recommendation [10, 25] in which a CNN or deep belief network (DBN) is directly used for
content-based recommendation. However, the models are deterministic and less robust since the noise
is not explicitly modeled. Besides, the CNN is directly linked to the ratings making the performance
suffer greatly when the ratings are sparse, as will be shown later in our experiments. Very recently,
collaborative deep learning (CDL) [23] is proposed as a probabilistic model for joint learning of
a probabilistic stacked denoising autoencoder (SDAE) [19] and collaborative filtering. However,
CDL is a feedforward model that uses bag-of-words as input and it does not model the order-aware
generation of sequences. Consequently, the model would have inferior recommendation performance
and is not capable of generating sequences at all, which will be shown in our experiments. Besides
order-awareness, another drawback of CDL is its lack of robustness (see Section 3.1 and 3.5 for
details). To address these problems, we propose a hierarchical Bayesian generative model called
collaborative recurrent autoencoder (CRAE) to jointly model the order-aware generation of sequences
(in the content information) and the rating information in a CF setting. Our main contributions are:
• By exploiting recurrent deep learning collaboratively, CRAE is able to sophisticatedly model the generation of items (sequences) while extracting the implicit relationship between items (and users). We design a novel pooling scheme for pooling variable-length sequences into fixed-length vectors and also propose a new denoising scheme to effectively avoid overfitting. Besides recommendation, CRAE can also be used to generate sequences on the fly.
• To the best of our knowledge, CRAE is the first model that bridges the gap between RNN and CF, especially with respect to hybrid methods for recommender systems. Besides, the Bayesian nature also enables CRAE to seamlessly incorporate other auxiliary information to further boost the performance.
• Extensive experiments on real-world datasets from different domains show that CRAE can substantially improve on the state of the art.
2 Problem Statement and Notation
Similar to [20], the recommendation task considered in this paper takes implicit feedback [9] as the
training and test data. There are J items (e.g., articles or movies) in the dataset. For item j, there is a
corresponding sequence consisting of T_j words, where the vector e_t^{(j)} specifies the t-th word using the 1-of-S representation, i.e., a vector of length S with the value 1 in only one element corresponding to the word and 0 in all other elements. Here S is the vocabulary size of the dataset. We define an I-by-J binary rating matrix R = [R_{ij}]_{I×J} where I denotes the number of users. For example, in the CiteULike dataset, R_{ij} = 1 if user i has article j in his or her personal library and R_{ij} = 0 otherwise. Given some of the ratings in R and the corresponding sequences of words e_t^{(j)} (e.g., titles of articles or plots of movies), the problem is to predict the other ratings in R.
In the following sections, e'_t^{(j)} denotes the noise-corrupted version of e_t^{(j)} and (h_t^{(j)}; s_t^{(j)}) refers to the concatenation of the two K_W-dimensional column vectors. All input weights (like Y_e and Y_e^i) and recurrent weights (like W_e and W_e^i) are of dimensionality K_W-by-K_W. The output state h_t^{(j)}, gate units (e.g., h_t^{o(j)}), and cell state s_t^{(j)} are of dimensionality K_W. K is the dimensionality of the final representation γ_j, middle-layer units θ_j, and latent vectors v_j and u_i. I_K or I_{K_W} denotes a K-by-K or K_W-by-K_W identity matrix. For convenience we use W^+ to denote the collection of all weights and biases. Similarly, h_t^+ is used to denote the collection of h_t^f, h_t^i, h_t^o, and h_t.
[Figure 1 appears here: graphical models of CRAE (left) and of the degenerated CRAE (right).]
Figure 1: On the left is the graphical model for an example CRAE where T_j = 2 for all j. To prevent clutter, the hyperparameters for beta-pooling, all weights, biases, and links between h_t and γ are omitted. On the right is the graphical model for the degenerated CRAE. An example recurrent autoencoder with T_j = 3 is shown. '⟨?⟩' is the ⟨wildcard⟩ and '$' marks the end of a sentence. E' and E are used in place of [e'_t^{(j)}]_{t=1}^{T_j} and [e_t^{(j)}]_{t=1}^{T_j}, respectively.
3 Collaborative Recurrent Autoencoder
In this section we will first propose a generalization of the RNN called robust recurrent networks
(RRN), followed by the introduction of two key concepts, wildcard denoising and beta-pooling, in
our model. After that, the generative process of CRAE is provided to show how to generalize the
RRN as a hierarchical Bayesian model from an i.i.d. setting to a CF (non-i.i.d.) setting.
3.1 Robust Recurrent Networks
One problem with RNN models like long short-term memory networks (LSTM) is that the computation is deterministic without taking the noise into account, which means it is not robust especially
with insufficient training data. To address this robustness problem, we propose RRN as a type of
noisy gated RNN. In RRN, the gates and other latent variables are designed to incorporate noise,
making the model more robust. Note that unlike [4, 5], the noise in RRN is directly propagated back
and forth in the network, without the need for using separate neural networks to approximate the
distributions of the latent variables. This is much more efficient and easier to implement. Here we
provide the generative process of RRN. Using t = 1 . . . Tj to index the words in the sequence, we
have (we drop the index j for items for notational simplicity):
?1
xt?1 ? N (Ww et?1 , ??1
s IKW ), at?1 ? N (Yxt?1 + Wht?1 + b, ?s IKW )
st ?
N (?(hft?1 )
st?1 +
?(hit?1 )
?(at?1 ), ??1
s IKW ),
(1)
(2)
where x_t is the word embedding of the t-th word, W_w is a K_W-by-S word embedding matrix, e_t is the 1-of-S representation mentioned above, ⊙ stands for the element-wise product operation between two vectors, σ(·) denotes the sigmoid function, s_t is the cell state of the t-th word, and b, Y, and W denote the biases, input weights, and recurrent weights, respectively. The forget gate units h_t^f and the input gate units h_t^i in Equation (2) are drawn from Gaussian distributions depending on their corresponding weights and biases Y^f, W^f, Y^i, W^i, b^f, and b^i:
h_t^f ~ N(Y^f x_t + W^f h_t + b^f, λ_s^{-1} I_{K_W}),   h_t^i ~ N(Y^i x_t + W^i h_t + b^i, λ_s^{-1} I_{K_W}).
The output h_t depends on the output gate h_t^o, which has its own weights and biases Y^o, W^o, and b^o:
h_t^o ~ N(Y^o x_t + W^o h_t + b^o, λ_s^{-1} I_{K_W}),   h_t ~ N(tanh(s_t) ⊙ σ(h_{t−1}^o), λ_s^{-1} I_{K_W}).   (3)
In the RRN, information of the processed sequence is contained in the cell states s_t and the output states h_t, both of which are column vectors of length K_W. Note that RRN can be seen as a generalized
and Bayesian version of LSTM [1]. Similar to [18, 3], two RRNs can be concatenated to form an
encoder-decoder architecture.
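To make the recurrence above concrete, the following NumPy sketch computes the mean (i.e., λ_s → ∞, noise-free) version of one RRN step from Equations (1)-(3). The dimensions, random initialization, and helper names are illustrative assumptions, not part of the paper.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative sizes (assumptions): vocabulary S, hidden dimension K_W.
S, K_W = 1000, 100
rng = np.random.default_rng(0)

def mat():
    return rng.normal(scale=0.01, size=(K_W, K_W))

W_w = rng.normal(scale=0.01, size=(K_W, S))                # word embedding matrix
Y, W, b = mat(), mat(), np.zeros(K_W)                      # weights of the pre-activation a
gates = {g: (mat(), mat(), np.zeros(K_W)) for g in "fio"}  # Y^g, W^g, b^g per gate

def rrn_step(e_prev, e_t, h_prev, s_prev, g_prev):
    """One mean RRN step.

    e_prev, e_t    : 1-of-S word vectors at steps t-1 and t
    h_prev, s_prev : output and cell states at step t-1
    g_prev         : dict with gate activations h^f_{t-1}, h^i_{t-1}, h^o_{t-1}
    """
    x_prev = W_w @ e_prev                                  # Eq. (1), mean of x_{t-1}
    a_prev = Y @ x_prev + W @ h_prev + b                   # Eq. (1), mean of a_{t-1}
    s_t = (sigmoid(g_prev["f"]) * s_prev
           + sigmoid(g_prev["i"]) * sigmoid(a_prev))       # Eq. (2), mean of s_t
    h_t = np.tanh(s_t) * sigmoid(g_prev["o"])              # Eq. (3), mean of h_t
    x_t = W_w @ e_t
    g_t = {g: Yg @ x_t + Wg @ h_t + bg                     # gates used at the next step
           for g, (Yg, Wg, bg) in gates.items()}
    return h_t, s_t, g_t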
3.2 Wildcard Denoising
Since the input and output are identical here, unlike [18, 3] where the input is from the source
language and the output is from the target language, this naive RRN autoencoder can suffer from
serious overfitting, even after taking noise into account and reversing sequence order (we find that
reversing sequence order in the decoder [18] does not improve the recommendation performance).
One natural way of handling it is to borrow ideas from the denoising autoencoder [19] by randomly
dropping some of the words in the encoder. Unfortunately, directly dropping words may mislead
the learning of transitions between words. For example, if we drop the word 'is' in the sentence 'this is a good idea', the encoder will wrongly learn the subsequence 'this a', which never appears in a grammatically correct sentence. Here we propose another denoising scheme, called wildcard denoising, where a special word '⟨wildcard⟩' is added to the vocabulary and we randomly select some of the words and replace them with '⟨wildcard⟩'. This way, the encoder RRN will take 'this ⟨wildcard⟩ a good idea' as input and successfully avoid learning wrong subsequences. We call this denoising recurrent autoencoder (DRAE). Note that the word '⟨wildcard⟩' also has a corresponding
word embedding. Intuitively this wildcard denoising RRN autoencoder learns to fill in the blanks in
sentences automatically. We find this denoising scheme much better than the naive one. For example,
in dataset CiteULike wildcard denoising can provide a relative accuracy boost of about 20%.
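A minimal sketch of this corruption scheme follows; the token string '<wildcard>' and the helper name are assumptions, and the rate of 0.4 is the value reported in Section 4.3.

import random

WILDCARD = "<wildcard>"  # assumed token string; the paper only requires a reserved word

def wildcard_denoise(words, rate=0.4, seed=0):
    """Replace each word by the wildcard token with probability `rate`.

    Unlike dropping words, this keeps sequence length and word positions intact,
    so the encoder never sees spurious transitions such as "this a".
    """
    rng = random.Random(seed)
    return [WILDCARD if rng.random() < rate else w for w in words]

print(wildcard_denoise("this is a good idea".split()))
# e.g. ['this', '<wildcard>', 'a', 'good', '<wildcard>']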
3.3 Beta-Pooling
The RRN autoencoders would produce a representation vector for each input word. In order to
facilitate the factorization of the rating matrix, we need to pool the sequence of vectors into one
single vector of fixed length 2KW before it is further encoded into a K-dimensional vector. A natural
way is to use a weighted average of the vectors. Unfortunately different sequences may need weights
of different size. For example, pooling a sequence of 8 vectors needs a weight vector with 8 entries
while pooling a sequence of 50 vectors needs one with 50 entries. In other words, we need a weight
vector of variable length for our pooling scheme. To tackle this problem, we propose to use a beta
distribution. If six vectors are to be pooled into one single vector (using weighted average), we
can use the area w_p in the range ((p−1)/6, p/6) of the x-axis of the probability density function (PDF) of the beta distribution Beta(a, b) as the pooling weight. The resulting pooling weight vector then becomes y = (w_1, . . . , w_6)^T. Since the total area is always 1 and the x-axis is bounded, the beta
becomes y = (w1 , . . . , w6 )T . Since the total area is always 1 and the x-axis is bounded, the beta
distribution is perfect for this type of variable-length pooling (hence the name beta-pooling). If we
set the hyperparameters a = b = 1, it will be equivalent to average pooling. If a is set large enough
and b > a the PDF will peak slightly to the left of x = 0.5, which means that the last time step of the
encoder RRN is directly used as the pooling result. With only two parameters, beta-pooling is able to
pool vectors flexibly enough without having the risk of overfitting the data.
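Using the CDF formulation made explicit in Section 3.4, beta-pooling amounts to a few lines of SciPy. The sketch below (helper names assumed) also illustrates the two special cases mentioned above.

import numpy as np
from scipy.stats import beta

def beta_pool(states, a, b):
    """Pool a (T, d) sequence of vectors with beta-distribution weights.

    The weight of step t is F(t/T; a, b) - F((t-1)/T; a, b), where F is the
    beta CDF, so the weights sum to one for any sequence length T.
    """
    T = len(states)
    grid = np.arange(T + 1) / T
    w = beta.cdf(grid[1:], a, b) - beta.cdf(grid[:-1], a, b)
    return w @ states

seq = np.random.default_rng(0).normal(size=(6, 4))
print(beta_pool(seq, 1, 1))        # a = b = 1 recovers plain average pooling
print(beta_pool(seq, 9.8e7, 1e8))  # mass concentrates just left of x = 0.5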
3.4 CRAE as a Hierarchical Bayesian Model
Following the notation in Section 2 and using the DRAE in Section 3.2 as a component, we then
provide the generative process of the CRAE (note that t indexes words or time steps, j indexes
sentences or documents, and T_j is the number of words in document j):
Encoding (t = 1, 2, . . . , T_j): Generate x'_{t−1}^{(j)}, a_{t−1}^{(j)}, and s_t^{(j)} according to Equations (1)-(2).
Compression and decompression (t = T_j + 1):
θ_j ~ N(W_1 (h_{T_j}; s_{T_j}) + b_1, λ_s^{-1} I_K),   (h_{T_j+1}; s_{T_j+1}) ~ N(W_2 tanh(θ_j) + b_2, λ_s^{-1} I_{2K_W}).   (4)
Decoding (t = T_j + 2, T_j + 3, . . . , 2T_j + 1): Generate a_{t−1}^{(j)}, s_t^{(j)}, and h_t^{(j)} according to Equations (1)-(3), after which generate:
e_{t−T_j−2}^{(j)} ~ Mult(softmax(W_g h_t^{(j)} + b_g)).
Beta-pooling and recommendation:
γ_j ~ N(tanh(W_1 f_{a,b}({(h_t^{(j)}; s_t^{(j)})}_t) + b_1), λ_s^{-1} I_K),   (5)
v_j ~ N(γ_j, λ_v^{-1} I_K),   u_i ~ N(0, λ_u^{-1} I_K),   R_{ij} ~ N(u_i^T v_j, C_{ij}^{-1}).
Note that each column of the weights and biases in W^+ is drawn from N(0, λ_w^{-1} I_{K_W}) or N(0, λ_w^{-1} I_K). In the generative process above, the input gate h_{t−1}^{i(j)} and the forget gate h_{t−1}^{f(j)} can be drawn as described in Section 3.1. e'_t^{(j)} denotes the corrupted word (with the embedding x'_t^{(j)}) and e_t^{(j)} denotes the original word (with the embedding x_t^{(j)}). λ_w, λ_u, λ_s, and λ_v are hyperparameters and C_{ij} is a confidence parameter (C_{ij} = α if R_{ij} = 1 and C_{ij} = β otherwise).
Note that if λ_s goes to infinity, the Gaussian distribution (e.g., in Equation (4)) will become a Dirac delta distribution centered at the mean. The compression and decompression act like a bottleneck between two Bayesian RRNs. The purpose is to reduce overfitting, provide necessary nonlinear transformation, and perform dimensionality reduction to obtain a more compact final representation γ_j for CF. The graphical model for an example CRAE where T_j = 2 for all j is shown in Figure 1 (left). f_{a,b}({(h_t^{(j)}; s_t^{(j)})}_t) in Equation (5) is the result of beta-pooling with hyperparameters a and b. If we denote the cumulative distribution function of the beta distribution as F(x; a, b), φ_t^{(j)} = (h_t^{(j)}; s_t^{(j)}) for t = 1, . . . , T_j, and φ_t^{(j)} = (h_{t+1}^{(j)}; s_{t+1}^{(j)}) for t = T_j + 1, . . . , 2T_j, then we have
f_{a,b}({(h_t^{(j)}; s_t^{(j)})}_t) = Σ_{t=1}^{2T_j} (F(t/(2T_j); a, b) − F((t−1)/(2T_j); a, b)) φ_t^{(j)}.
Please see Section 3 of the supplementary materials for details (including hyperparameter learning) of beta-pooling. From the
generative process, we can see that both CRAE and CDL are Bayesian deep learning (BDL) models
(as described in [24]) with a perception component (DRAE in CRAE) and a task-specific component.
3.5 Learning
According to the CRAE model above, all parameters like h_t^{(j)} and v_j can be treated as random variables so that a full Bayesian treatment such as methods based on variational approximation can be used. However, due to the extreme nonlinearity and the CF setting, this kind of treatment is non-trivial. Besides, with CDL [23] and CTR [20] as our primary baselines, it would be fairer to use maximum a posteriori (MAP) estimates, which is what CDL and CTR do.
End-to-end joint learning: Maximization of the posterior probability is equivalent to maximizing the joint log-likelihood of {u_i}, {v_j}, W^+, {γ_j}, {θ_j}, {e'_t^{(j)}}, {e_t^{(j)}}, {h_t^+}, {s_t^{(j)}}, and R given λ_u, λ_v, λ_w, and λ_s:
ℒ = log p(DRAE|λ_s, λ_w) − (λ_u/2) Σ_i ‖u_i‖_2² − (λ_v/2) Σ_j ‖v_j − γ_j‖_2² − Σ_{i,j} (C_{ij}/2) (R_{ij} − u_i^T v_j)²
    − (λ_s/2) Σ_j ‖tanh(W_1 f_{a,b}({(h_t^{(j)}; s_t^{(j)})}_t) + b_1) − γ_j‖_2²,
where log p(DRAE|λ_s, λ_w) corresponds to the prior and likelihood terms for DRAE (including the encoding, compression, decompression, and decoding in Section 3.4) involving W^+, {θ_j}, {e'_t^{(j)}}, {e_t^{(j)}}, {h_t^+}, and {s_t^{(j)}}. For simplicity and computational efficiency, we can fix the hyperparameters of beta-pooling so that Beta(a, b) peaks slightly to the left of x = 0.5 (e.g., a = 9.8 × 10^7, b = 1 × 10^8), which leads to γ_j = tanh(θ_j) (a treatment for the more general case with learnable a or b is provided in the supplementary materials). Further, if λ_s approaches infinity, the terms with λ_s in log p(DRAE|λ_s, λ_w) will vanish and γ_j will become tanh(W_1 (h_{T_j}^{(j)}; s_{T_j}^{(j)}) + b_1). Figure 1 (right) shows the graphical model of a degenerated CRAE when λ_s approaches positive infinity and b > a (with very large a and b). Learning this degenerated version of CRAE is equivalent to jointly training a wildcard denoising RRN and an encoding RRN coupled with the rating matrix. If λ_v ≫ 1, CRAE will further degenerate to a two-step model where the representation γ_j learned by the DRAE is directly used for CF. On the contrary, if λ_v ≪ 1, the decoder RRN essentially vanishes.
Both extreme cases can greatly degrade the predictive performance, as shown in the experiments.
Robust nonlinearity on distributions: Different from [23, 22], the nonlinear transformation is performed after adding the noise with precision λ_s (e.g., a_t^{(j)} in Equation (1)). In this case, the input of the nonlinear transformation is a distribution rather than a deterministic value, making the nonlinearity more robust than in [23, 22] and leading to more efficient and direct learning algorithms than CDL.
Consider a univariate Gaussian distribution N(x|μ, λ_s^{-1}) and the sigmoid function σ(x) = 1/(1 + exp(−x)); for the expectation we have (see Section 6 of the supplementary materials for details):
E(x) = ∫ N(x|μ, λ_s^{-1}) σ(x) dx = σ(κ(λ_s) μ),   (6)
with κ(λ_s) = (1 + π λ_s^{-1}/8)^{-1/2}. Equation (6) holds because the convolution of a sigmoid function with a Gaussian distribution can be approximated by another sigmoid function. Similarly, we can approximate σ(x)² with σ(ρ_1(x + ρ_0)), where ρ_1 = 4 − 2√2 and ρ_0 = −log(√2 + 1). Hence the variance
D(x) ≈ ∫ N(x|μ, λ_s^{-1}) σ(ρ_1(x + ρ_0)) dx − E(x)² = σ(ρ_1(μ + ρ_0) / (1 + π ρ_1² λ_s^{-1}/8)^{1/2}) − E(x)² ≈ λ_s^{-1},   (7)
where we use λ_s^{-1} to approximate D(x) for computational efficiency. Using Equations (6) and (7), the Gaussian distribution in Equation (2) can be computed as:
N(σ(h̄_{t−1}^f) ⊙ s̄_{t−1} + σ(h̄_{t−1}^i) ⊙ σ(ā_{t−1}), λ_s^{-1} I_{K_W})
≈ N(σ(κ(λ_s) h̄_{t−1}^f) ⊙ s̄_{t−1} + σ(κ(λ_s) h̄_{t−1}^i) ⊙ σ(κ(λ_s) ā_{t−1}), λ_s^{-1} I_{K_W}),   (8)
where the superscript (j) is dropped. We use overlines (e.g., ā_{t−1} = Y_e x_{t−1} + W_e h̄_{t−1} + b_e) to denote the mean of the distribution from which a hidden variable is drawn. By applying Equation (8) recursively, we can compute s̄_t for any t. A similar approximation is used for tanh(x) in Equation (3) since tanh(x) = 2σ(2x) − 1. This way the feedforward computation of DRAE would be seamlessly
chained together, leading to more efficient learning algorithms than the layer-wise algorithms in
[23, 22] (see Section 6 of the supplementary materials for more details).
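The approximation in Equation (6) is easy to sanity-check numerically. The sketch below compares a Monte Carlo estimate of E[σ(x)] against the closed form, using κ(λ_s) = (1 + π λ_s^{-1}/8)^{-1/2} as in the reconstruction above (the standard probit-style approximation); the specific μ and λ_s are arbitrary test values.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
lam_s, mu = 2.0, 1.3                  # arbitrary precision and mean
samples = rng.normal(loc=mu, scale=lam_s ** -0.5, size=1_000_000)
kappa = (1.0 + np.pi / (8.0 * lam_s)) ** -0.5
print(sigmoid(samples).mean())        # Monte Carlo estimate, ~0.766
print(sigmoid(kappa * mu))            # closed-form approximation, ~0.766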
Learning parameters: To learn u_i and v_j, block coordinate ascent can be used. Given the current W^+, we can compute γ_j as γ_j = tanh(W_1 f_{a,b}({(h_t^{(j)}; s_t^{(j)})}_t) + b_1) and get the following update rules:
u_i ← (V C_i V^T + λ_u I_K)^{-1} V C_i R_i,
v_j ← (U C_j U^T + λ_v I_K)^{-1} (U C_j R_j + λ_v tanh(W_1 f_{a,b}({(h_t^{(j)}; s_t^{(j)})}_t) + b_1)),
where U = (u_i)_{i=1}^I, V = (v_j)_{j=1}^J, C_i = diag(C_{i1}, . . . , C_{iJ}) is a diagonal matrix, and R_i = (R_{i1}, . . . , R_{iJ})^T is a column vector containing all the ratings of user i.
Given U and V, W^+ can be learned using the back-propagation algorithm according to Equations (6)-(8) and the generative process in Section 3.4. Alternating the updates of U, V, and W^+ gives a local optimum of ℒ. After U and V are learned, we can predict the ratings as R_{ij} = u_i^T v_j.
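For small dense problems the two update rules can be implemented directly as below; in CRAE these sweeps alternate with back-propagation updates of W^+. The hyperparameter values are illustrative assumptions, with α and β the confidence weights from Section 4.3.

import numpy as np

def als_sweeps(R, Gamma, lam_u=0.1, lam_v=10.0, alpha=1.0, beta=0.01, sweeps=10):
    """Block coordinate ascent for U and V with the DRAE output Gamma fixed.

    R     : I-by-J binary rating matrix
    Gamma : J-by-K matrix whose j-th row is gamma_j = tanh(W_1 f_{a,b}(.) + b_1)
    """
    I, J = R.shape
    K = Gamma.shape[1]
    rng = np.random.default_rng(0)
    U = rng.normal(scale=0.1, size=(I, K))
    V = Gamma.copy()
    C = np.where(R > 0, alpha, beta)             # confidence matrix C_ij
    I_K = np.eye(K)
    for _ in range(sweeps):
        for i in range(I):                       # u_i update
            Ci = np.diag(C[i])
            U[i] = np.linalg.solve(V.T @ Ci @ V + lam_u * I_K, V.T @ Ci @ R[i])
        for j in range(J):                       # v_j update
            Cj = np.diag(C[:, j])
            V[j] = np.linalg.solve(U.T @ Cj @ U + lam_v * I_K,
                                   U.T @ Cj @ R[:, j] + lam_v * Gamma[j])
    return U, V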
4 Experiments
In this section, we report some experiments on real-world datasets from different domains to evaluate
the capabilities of recommendation and automatic generation of missing sequences.
4.1 Datasets
We use two datasets from different real-world domains. CiteULike is from [20] with 5,551 users and
16,980 items (articles with text). Netflix consists of 407,261 users, 9,228 movies, and 15,348,808
ratings after removing users with less than 3 positive ratings (following [23], ratings larger than 3 are
regarded as positive ratings). Please see Section 7 of the supplementary materials for details.
4.2 Evaluation Schemes
Recommendation: For the recommendation task, similar to [21, 23], P items associated with each
user are randomly selected to form the training set and the rest is used as the test set. We evaluate the
models when the ratings are in different degrees of density (P ∈ {1, 2, . . . , 5}). For each value of P,
we repeat the evaluation five times with different training sets and report the average performance.
Following [20, 21], we use recall as the performance measure since the ratings are in the form of
implicit feedback [9, 12]. Specifically, a zero entry may be due to the fact that the user is not interested
in the item, or that the user is not aware of its existence. Thus precision is not a suitable performance
measure. We sort the predicted ratings of the candidate items and recommend the top M items for
the target user. The recall@M for each user is then defined as:
recall@M = (# items that the user likes among the top M) / (# items that the user likes).
The average recall over all users is reported.
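A direct implementation of this metric (with assumed argument names) is:

import numpy as np

def recall_at_M(scores, liked, M=300):
    """Average recall@M over users.

    scores : I-by-J matrix of predicted ratings u_i^T v_j
    liked  : I-by-J binary matrix of held-out positive items
    (in practice, items seen during training would be masked out first)
    """
    recalls = []
    for s, l in zip(scores, liked):
        top = np.argsort(-s)[:M]                 # indices of the top-M items
        n_liked = l.sum()
        if n_liked > 0:
            recalls.append(l[top].sum() / n_liked)
    return float(np.mean(recalls))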
Figure 2: Performance comparison of CRAE, CDL, CTR, DeepMusic, CMF, and SVDFeature based
on recall@M for datasets CiteULike and Netflix. P is varied from 1 to 5 in the first two figures.
We also use another evaluation metric, mean average precision (mAP), in the experiments. Exactly
the same as [10], the cutoff point is set at 500 for each user.
Sequence generation on the fly: For the sequence generation task, we set P = 5. In terms of
content information (e.g., movie plots), we randomly select 80% of the items to include their content
in the training set. The trained models are then used to predict (generate) the content sequences for
the other 20% items. The BLEU score [11] is used to evaluate the quality of generation. To compute
the BLEU score in CiteULike we use the titles as training sentences (sequences). Both the titles
and sentences in the abstracts of the articles (items) are used as reference sentences. For Netflix, the
first sentences of the plots are used as training sentences. The movie names and sentences in the
plots are used as reference sentences. A higher BLEU score indicates higher quality of sequence
generation. Since CDL, CTR, and PMF cannot generate sequences directly, a nearest neighborhood
based approach is used with the resulting vj . Note that this task is extremely difficult because the
sequences of the test set are unknown during both the training and testing phases. For this reason,
this task is impossible for existing machine translation models like [18, 3].
4.3 Baselines and Experimental Settings
The models for comparison are listed as follows:
• CMF: Collective Matrix Factorization [17] is a model incorporating different sources of information by simultaneously factorizing multiple matrices.
• SVDFeature: SVDFeature [2] is a model for feature-based collaborative filtering. In this paper we use the bag-of-words as raw features to feed into SVDFeature.
• DeepMusic: DeepMusic [10] is a feedforward model for music recommendation mentioned in Section 1. We use the best performing variant as our baseline.
• CTR: Collaborative Topic Regression [20] is a model performing topic modeling and collaborative filtering simultaneously as mentioned in the previous section.
• CDL: Collaborative Deep Learning (CDL) [23] is proposed as a probabilistic feedforward model for joint learning of a probabilistic SDAE [19] and CF.
• CRAE: Collaborative Recurrent Autoencoder is our proposed recurrent model. It jointly performs collaborative filtering and learns the generation of content (sequences).
In the experiments, we use 5-fold cross validation to find the optimal hyperparameters for CRAE and
the baselines. For CRAE, we set α = 1, β = 0.01, K = 50, and K_W = 100. The wildcard denoising
rate is set to 0.4. See Section 5.1 of the supplementary materials for details.
4.4 Quantitative Comparison
Recommendation: The first two plots of Figure 2 show the recall@M for the two datasets when P
is varied from 1 to 5. As we can see, CTR outperforms the other baselines except for CDL. Note that
as previously mentioned, in both datasets DeepMusic suffers badly from overfitting when the rating
matrix is extremely sparse (P = 1) and achieves comparable performance with CTR when the rating
matrix is dense (P = 5). CDL as the strongest baseline consistently outperforms other baselines.
By jointly learning the order-aware generation of content (sequences) and performing collaborative
filtering, CRAE is able to outperform all the baselines by a margin of 0.7%-1.9% (a relative boost of 2.0%-16.7%) in CiteULike and 3.5%-6.0% (a relative boost of 5.7%-22.5%) in Netflix. Note that since the standard deviation is minimal (3.38 × 10^{-5} to 2.56 × 10^{-3}), it is not included in the figures and tables to avoid clutter.
The last two plots of Figure 2 show the recall@M for CiteULike and Netflix when M varies from 50
to 300 and P = 1. As shown in the plots, the performance of DeepMusic, CMF, and SVDFeature is
Figure 3: The shape of the beta distribution for different a and b (corresponding to Table 1).
Table 1: Recall@300 for beta-pooling with different hyperparameters
a      | 31112 | 311   | 1     | 1     | 0.4   | 10    | 400   | 40000
b      | 40000 | 400   | 10    | 1     | 0.4   | 1     | 311   | 31112
Recall | 12.17 | 12.54 | 10.48 | 11.62 | 11.08 | 10.72 | 12.71 | 12.22
Table 2: mAP for two datasets
          | CRAE   | CDL    | CTR    | DeepMusic | CMF    | SVDFeature
CiteULike | 0.0123 | 0.0091 | 0.0071 | 0.0058    | 0.0061 | 0.0056
Netflix   | 0.0301 | 0.0275 | 0.0211 | 0.0156    | 0.0144 | 0.0173
Table 3: BLEU score for two datasets
          | CRAE  | CDL   | CTR   | PMF
CiteULike | 46.60 | 21.14 | 31.47 | 17.85
Netflix   | 48.69 | 6.90  | 17.17 | 11.74
similar in this setting. Again CRAE is able to outperform the baselines by a large margin and the
margin gets larger with the increase of M .
As shown in Figure 3 and Table 1, we also investigate the effect of a and b in beta-pooling and find
that in DRAE: (1) temporal average pooling performs poorly (a = b = 1); (2) most information
concentrates near the bottleneck; (3) the right of the bottleneck contains more information than the
left. Please see Section 4 of the supplementary materials for more details.
As another evaluation metric, Table 2 compares different models based on mAP. As we can see,
compared with CDL, CRAE can provide a relative boost of 35% and 10% for CiteULike and
Netflix, respectively. Besides quantitative comparison, qualitative comparison of CRAE and CDL is
provided in Section 2 of the supplementary materials. In terms of time cost, CDL needs 200 epochs
(40s/epoch) while CRAE needs about 80 epochs (150s/epoch) for optimal performance.
Sequence generation on the fly: To evaluate the ability of sequence generation, we compute the
BLEU score of the sequences (titles for CiteULike and plots for Netflix) generated by different models.
As mentioned in Section 4.2, this task is impossible for existing machine translation models like
[18, 3] due to the lack of source sequences. As we can see in Table 3, CRAE achieves a BLEU
score of 46.60 for CiteULike and 48.69 for Netflix, which is much higher than CDL, CTR and PMF.
Incorporating the content information when learning user and item latent vectors, CTR is able to
outperform other baselines and CRAE can further boost the BLEU score by sophisticatedly and jointly
modeling the generation of sequences and ratings. Note that although CDL is able to outperform
other baselines in the recommendation task, it performs poorly when generating sequences on the fly,
which demonstrates the importance of modeling each sequence recurrently as a whole rather than as
separate words.
5 Conclusions and Future Work
We develop a collaborative recurrent autoencoder which can sophisticatedly model the generation of
item sequences while extracting the implicit relationship between items (and users). We design a new
pooling scheme for pooling variable-length sequences and propose a wildcard denoising scheme to
effectively avoid overfitting. To the best of our knowledge, CRAE is the first model to bridge the
gap between RNN and CF. Extensive experiments show that CRAE can significantly outperform the
state-of-the-art methods on both the recommendation and sequence generation tasks.
With its Bayesian nature, CRAE can easily be generalized to seamlessly incorporate auxiliary
information (e.g., the citation network for CiteULike and the co-director network for Netflix) for
further accuracy boost. Moreover, multiple Bayesian recurrent layers may be stacked together to
increase its representation power. Besides making recommendations and guessing sequences on
the fly, the wildcard denoising recurrent autoencoder also has potential to solve other challenging
problems such as recovering the blurred words in ancient documents.
References
[1] Y. Bengio, I. J. Goodfellow, and A. Courville. Deep learning. Book in preparation for MIT Press, 2015.
[2] T. Chen, W. Zhang, Q. Lu, K. Chen, Z. Zheng, and Y. Yu. SVDFeature: a toolkit for feature-based
collaborative filtering. JMLR, 13:3619-3622, 2012.
[3] K. Cho, B. van Merrienboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio. Learning
phrase representations using RNN encoder-decoder for statistical machine translation. In EMNLP, 2014.
[4] J. Chung, K. Kastner, L. Dinh, K. Goel, A. C. Courville, and Y. Bengio. A recurrent latent variable model
for sequential data. In NIPS, 2015.
[5] O. Fabius and J. R. van Amersfoort. Variational recurrent auto-encoders. arXiv preprint arXiv:1412.6581,
2014.
[6] K. Georgiev and P. Nakov. A non-iid framework for collaborative filtering with restricted Boltzmann
machines. In ICML, 2013.
[7] B. Hidasi, A. Karatzoglou, L. Baltrunas, and D. Tikk. Session-based recommendations with recurrent
neural networks. arXiv preprint arXiv:1511.06939, 2015.
[8] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.
[9] Y. Hu, Y. Koren, and C. Volinsky. Collaborative filtering for implicit feedback datasets. In ICDM, 2008.
[10] A. V. D. Oord, S. Dieleman, and B. Schrauwen. Deep content-based music recommendation. In NIPS,
2013.
[11] K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu. BLEU: a method for automatic evaluation of machine
translation. In ACL, 2002.
[12] S. Rendle, C. Freudenthaler, Z. Gantner, and L. Schmidt-Thieme. BPR: Bayesian personalized ranking
from implicit feedback. In UAI, 2009.
[13] F. Ricci, L. Rokach, and B. Shapira. Introduction to Recommender Systems Handbook. Springer, 2011.
[14] T. N. Sainath, B. Kingsbury, V. Sindhwani, E. Arisoy, and B. Ramabhadran. Low-rank matrix factorization
for deep neural network training with high-dimensional output targets. In ICASSP, 2013.
[15] R. Salakhutdinov and A. Mnih. Probabilistic matrix factorization. In NIPS, 2007.
[16] R. Salakhutdinov, A. Mnih, and G. E. Hinton. Restricted Boltzmann machines for collaborative filtering.
In ICML, 2007.
[17] A. P. Singh and G. J. Gordon. Relational learning via collective matrix factorization. In KDD, 2008.
[18] I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. In NIPS,
2014.
[19] P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P.-A. Manzagol. Stacked denoising autoencoders:
Learning useful representations in a deep network with a local denoising criterion. JMLR, 11:3371-3408,
2010.
[20] C. Wang and D. M. Blei. Collaborative topic modeling for recommending scientific articles. In KDD,
2011.
[21] H. Wang, B. Chen, and W.-J. Li. Collaborative topic regression with social regularization for tag recommendation. In IJCAI, 2013.
[22] H. Wang, X. Shi, and D. Yeung. Relational stacked denoising autoencoder for tag recommendation. In
AAAI, 2015.
[23] H. Wang, N. Wang, and D. Yeung. Collaborative deep learning for recommender systems. In KDD, 2015.
[24] H. Wang and D. Yeung. Towards Bayesian deep learning: A framework and some existing methods. TKDE,
2016, to appear.
[25] X. Wang and Y. Wang. Improving content-based and hybrid music recommendation using deep learning.
In ACM MM, 2014.
5,707 | 6,164 | Clustering Signed Networks with the
Geometric Mean of Laplacians
Pedro Mercado¹, Francesco Tudisco² and Matthias Hein¹
¹ Saarland University, Saarbrücken, Germany
² University of Padua, Padua, Italy
Abstract
Signed networks allow to model positive and negative relationships. We analyze
existing extensions of spectral clustering to signed networks. It turns out that
existing approaches do not recover the ground truth clustering in several situations
where either the positive or the negative network structures contain no noise. Our
analysis shows that these problems arise as existing approaches take some form of
arithmetic mean of the Laplacians of the positive and negative part. As a solution
we propose to use the geometric mean of the Laplacians of positive and negative
part and show that it outperforms the existing approaches. While the geometric
mean of matrices is computationally expensive, we show that eigenvectors of the
geometric mean can be computed efficiently, leading to a numerical scheme for
sparse matrices which is of independent interest.
1 Introduction
A signed graph is a graph with positive and negative edge weights. Typically positive edges model
attractive relationships between objects such as similarity or friendship and negative edges model
repelling relationships such as dissimilarity or enmity. The concept of balanced signed networks
can be traced back to [10, 3]. Later, in [5], a signed graph is defined as k-balanced if there exists
a partition into k groups where only positive edges are within the groups and negative edges are
between the groups. Several approaches to find communities in signed graphs have been proposed
(see [23] for an overview). In this paper we focus on extensions of spectral clustering to signed
graphs. Spectral clustering is a well established method for unsigned graphs which, based on the
first eigenvectors of the graph Laplacian, embeds nodes of the graphs in R^k and then uses k-means
to find the partition. In [16] the idea is transferred to signed graphs. They define the signed ratio
and normalized cut functions and show that the spectrum of suitable signed graph Laplacians yields a
relaxation of those objectives. In [4] other objective functions for signed graphs are introduced. They
show that a relaxation of their objectives is equivalent to weighted kernel k-means by choosing an
appropriate kernel. While they have a scalable method for clustering, they report that they can not
find any cluster structure in real world signed networks.
We show that the existing extensions of the graph Laplacian to signed graphs used for spectral
clustering have severe deficiencies. Our analysis of the stochastic block model for signed graphs
shows that, even for the perfectly balanced case, recovery of the ground-truth clusters is not guaranteed.
The reason is that the eigenvectors encoding the cluster structure do not necessarily correspond to
the smallest eigenvalues, thus leading to a noisy embedding of the data points and in turn failure
of k-means to recover the cluster structure. The implicit mathematical reason is that all existing
extensions of the graph Laplacian are based on some form of arithmetic mean of operators of the
positive and negative graphs. In this paper we suggest as a solution to use the geometric mean of
the Laplacians of positive and negative part. In particular, we show that in the stochastic block
model the geometric mean Laplacian allows in expectation to recover the ground-truth clusters in
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
any reasonable clustering setting. A main challenge for our approach is that the geometric mean
Laplacian is computationally expensive and does not scale to large sparse networks. Thus a main
contribution of this paper is showing that the first few eigenvectors of the geometric mean can still be
computed efficiently. Our algorithm is based on the inverse power method and the extended Krylov
subspace technique introduced by [8] and allows to compute eigenvectors of the geometric mean
A#B of two matrices A, B without ever computing A#B itself.
In Section 2 we discuss existing work on Laplacians on signed graphs. In Section 3 we discuss the
geometric mean of two matrices and introduce the geometric mean Laplacian which is the basis of our
spectral clustering method for signed graphs. In Section 4 we analyze our and existing approaches for
the stochastic block model. In Section 5 we introduce our efficient algorithm to compute eigenvectors
of the geometric mean of two matrices, and finally in Section 6 we discuss performance of our
approach on real world graphs. Proofs have been moved to the supplementary material.
2 Signed graph clustering
Networks encoding positive and negative relations among the nodes can be represented by weighted
signed graphs. Consider two symmetric non-negative weight matrices W^+ and W^−, a vertex set V = {v_1, . . . , v_n}, and let G^+ = (V, W^+) and G^− = (V, W^−) be the induced graphs. A signed graph is the pair G^± = (G^+, G^−), where G^+ and G^− encode the positive and the negative relations, respectively.
The concept of community in signed networks is typically related to the theory of social balance.
This theory, as presented in [10, 3], is based on the analysis of affective ties, where positive ties are a
source of balance whereas negative ties are considered as a source of imbalance in social groups.
Definition 1 ([5], k-balance). A signed graph is k-balanced if the set of vertices can be partitioned
into k sets such that within the subsets there are only positive edges, and between them only negative.
The presence of k-balance in G^± implies the presence of k groups of nodes being both assortative in G^+ and disassortative in G^−. However this situation is fairly rare in real world networks and expecting communities in signed networks to be a perfectly balanced set of nodes is unrealistic.
In the next section we will show that Laplacians inspired by Definition 1 are based on some form of
arithmetic mean of Laplacians. As an alternative we propose the geometric mean of Laplacians and
show that it is able to recover communities when either G^+ is assortative, or G^− is disassortative, or
both. Results of this paper will make clear that the use of the geometric mean of Laplacians allows to
recognize communities where previous approaches fail.
2.1 Laplacians on Unsigned Graphs
Spectral clustering of undirected, unsigned graphs using the Laplacian matrix is a well established
technique (see [19] for an overview). Given an unsigned graph G = (V, W ), the Laplacian and its
normalized version are defined as
L = D − W,   L_sym = D^{-1/2} L D^{-1/2}   (1)
where D_{ii} = Σ_{j=1}^n w_{ij} is the diagonal matrix of the degrees of G. Both Laplacians are positive
semidefinite, and the multiplicity k of the eigenvalue 0 is equal to the number of connected components in the graph. Further, the Laplacian is suitable in assortative cases [19], i.e. for the identification
of clusters under the assumption that the amount of edges inside clusters has to be larger than the
amount of edges between them.
For disassortative cases, i.e. for the identification of clusters where the amount of edges has to be
larger between clusters than inside clusters, the signless Laplacian is a better choice [18]. Given the
unsigned graph G = (V, W ), the signless Laplacian and its normalized version are defined as
Q = D + W,
Qsym = D?1/2 QD?1/2
(2)
Both Laplacians are positive semi-definite, and the smallest eigenvalue is zero if and only if the graph
has a bipartite component [6].
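Both operators are straightforward to form explicitly. The sketch below (helper name assumed) uses the equivalent expressions L_sym = I − D^{-1/2} W D^{-1/2} and Q_sym = I + D^{-1/2} W D^{-1/2}, which follow directly from (1) and (2).

import numpy as np

def normalized_laplacians(W):
    """Return (L_sym, Q_sym) of Equations (1)-(2) for a dense weight matrix W."""
    d = W.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, d, 1.0) ** -0.5   # guard against isolated nodes
    N = d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]
    I = np.eye(W.shape[0])
    return I - N, I + N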
2.2 Laplacians on Signed Graphs
Recently a number of Laplacian operators for signed networks have been introduced. Consider the signed graph G^± = (G^+, G^−). Let D^+_{ii} = Σ_{j=1}^n w^+_{ij} be the diagonal matrix of the degrees of G^+ and D̄_{ii} = Σ_{j=1}^n (w^+_{ij} + w^−_{ij}) the one of the overall degrees in G^±.
The following Laplacians for signed networks have been considered so far
L_BR = D^+ − W^+ + W^−,   L_BN = D̄^{-1} L_BR,   (balance ratio/normalized Laplacian)
L_SR = D̄ − W^+ + W^−,   L_SN = D̄^{-1/2} L_SR D̄^{-1/2},   (signed ratio/normalized Laplacian)   (3)
and spectral clustering algorithms have been proposed for G^±, based on these Laplacians [16, 4].
Let L^+ and Q^− be the Laplacian and the signless Laplacian matrices of the graphs G^+ and G^−, respectively. We note that the matrix L_SR blends the information from G^+ and G^− into (twice) the arithmetic mean of L^+ and Q^−, namely the following identity holds
L_SR = L^+ + Q^−.   (4)
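Identity (4) can be checked numerically in a few lines; the random toy graphs below are an assumed setup.

import numpy as np

rng = np.random.default_rng(0)
n = 6

def sym(M):                        # symmetric 0/1 weights, no self-loops
    M = np.triu(M, 1)
    return M + M.T

W_pos = sym((rng.random((n, n)) < 0.4).astype(float))
W_neg = sym((rng.random((n, n)) < 0.3).astype(float))

D_bar = np.diag((W_pos + W_neg).sum(axis=1))   # overall degrees
L_SR = D_bar - W_pos + W_neg                   # signed ratio Laplacian
L_plus = np.diag(W_pos.sum(axis=1)) - W_pos    # Laplacian of G^+
Q_minus = np.diag(W_neg.sum(axis=1)) + W_neg   # signless Laplacian of G^-
print(np.allclose(L_SR, L_plus + Q_minus))     # True: identity (4)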
Thus, as an alternative to the normalization defining L_SN from L_SR, it is natural to consider the arithmetic mean of the normalized Laplacians L_AM = L^+_sym + Q^−_sym. In the next section we introduce the geometric mean of L^+_sym and Q^−_sym and propose a new clustering algorithm for signed graphs based on that matrix. The analysis and experiments of the next sections will show that blending the information from the positive and negative graphs through the geometric mean overcomes the deficiencies shown by the arithmetic mean based operators.
3 Geometric mean of Laplacians
We define here the geometric mean of matrices and introduce the geometric mean of normalized
Laplacians for clustering signed networks. Let A^{1/2} be the unique positive definite solution of the matrix equation X² = A, where A is positive definite.
Definition 2. Let A, B be positive definite matrices. The geometric mean of A and B is the positive definite matrix A#B defined by A#B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}.
One can prove that A#B = B#A (see [1] for details). Further, there are several useful ways to
represent the geometric mean of positive definite matrices (see f.i. [1, 12])
A#B = A(A^{-1}B)^{1/2} = (BA^{-1})^{1/2}A = B(B^{-1}A)^{1/2} = (AB^{-1})^{1/2}B   (5)
The next result reveals further consistency with the scalar case, in fact we observe that if A and B have
some eigenvectors in common, then A + B and A#B have those eigenvectors, with eigenvalues given
by the arithmetic and geometric mean of the corresponding eigenvalues of A and B, respectively.
Theorem 1. Let u be an eigenvector of A and B with eigenvalues λ and μ, respectively. Then u is an eigenvector of A + B and A#B with eigenvalues λ + μ and √(λμ), respectively.
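For small dense matrices, Definition 2 and Theorem 1 can be verified directly with SciPy; the construction of A and B with shared eigenvectors is an assumed test setup.

import numpy as np
from scipy.linalg import sqrtm

def geometric_mean(A, B):
    """A#B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2} for SPD A, B."""
    A_half = np.real(sqrtm(A))
    A_half_inv = np.linalg.inv(A_half)
    G = A_half @ np.real(sqrtm(A_half_inv @ B @ A_half_inv)) @ A_half
    return (G + G.T) / 2                           # symmetrize away round-off

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(5, 5)))       # shared eigenvectors
lam = rng.uniform(0.5, 2.0, size=5)
mu = rng.uniform(0.5, 2.0, size=5)
A = Q @ np.diag(lam) @ Q.T
B = Q @ np.diag(mu) @ Q.T
G = geometric_mean(A, B)
print(np.allclose(np.sort(np.linalg.eigvalsh(G)),
                  np.sort(np.sqrt(lam * mu))))     # True, as in Theorem 1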
3.1 Geometric mean for signed networks clustering
Consider the signed network G^± = (G^+, G^−). We define the normalized geometric mean Laplacian of G^± as
L_GM = L^+_sym # Q^−_sym.   (6)
We propose Algorithm 1 for clustering signed networks, based on the spectrum of L_GM. By Definition 2, the matrix geometric mean A#B requires A and B to be positive definite. As both the Laplacian and the signless Laplacian are positive semi-definite, in what follows we shall assume that the matrices L^+_sym and Q^−_sym in (6) are modified by a small diagonal shift, ensuring positive definiteness. That is, in practice, we consider L^+_sym + ε_1 I and Q^−_sym + ε_2 I, with ε_1 and ε_2 small positive numbers. For the sake of brevity, we do not explicitly write the shifting matrices.
Input: Symmetric weight matrices W^+, W^− ∈ R^{n×n}, number k of clusters to construct.
Output: Clusters C₁, ..., C_k.
1: Compute the k eigenvectors u₁, ..., u_k corresponding to the k smallest eigenvalues of L_GM.
2: Let U = (u₁, ..., u_k).
3: Cluster the rows of U with k-means into clusters C₁, ..., C_k.
Algorithm 1: Spectral clustering with L_GM on signed networks
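A small-scale, dense rendering of Algorithm 1 might look as follows (an illustrative sketch under our own naming conventions; it uses a direct eigendecomposition, whereas Section 5 develops the scalable matrix-free solver):

```python
import numpy as np
from scipy.linalg import sqrtm, eigh
from sklearn.cluster import KMeans

def normalized_laplacians(W_pos, W_neg, eps=1e-6):
    """L^+_sym and Q^-_sym, with a small diagonal shift ensuring definiteness."""
    n = W_pos.shape[0]
    dp = np.maximum(W_pos.sum(axis=1), 1e-12) ** -0.5
    dn = np.maximum(W_neg.sum(axis=1), 1e-12) ** -0.5
    L_pos = np.eye(n) - dp[:, None] * W_pos * dp[None, :]   # normalized Laplacian of G+
    Q_neg = np.eye(n) + dn[:, None] * W_neg * dn[None, :]   # normalized signless Laplacian of G-
    return L_pos + eps * np.eye(n), Q_neg + eps * np.eye(n)

def geometric_mean(A, B):
    As = np.real(sqrtm(A))
    Asi = np.linalg.inv(As)
    return As @ np.real(sqrtm(Asi @ B @ Asi)) @ As

def signed_spectral_clustering(W_pos, W_neg, k):
    L_pos, Q_neg = normalized_laplacians(W_pos, W_neg)
    L_gm = geometric_mean(L_pos, Q_neg)                   # L_GM of eq. (6)
    _, U = eigh(L_gm, subset_by_index=[0, k - 1])         # k smallest eigenpairs
    return KMeans(n_clusters=k, n_init=10).fit_predict(U)  # cluster the rows of U
```

Note that this dense construction costs O(n³) and is only meant for small graphs; the point of Section 5 is precisely to avoid forming L_GM.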
(E_+)    p^+_out < p^+_in
(E_−)    p^−_in < p^−_out
(E_bal)  p^−_in + p^+_out < p^+_in + p^−_out
(E_vol)  p^−_in + (k−1)p^−_out < p^+_in + (k−1)p^+_out
(E_conf) [k p^−_in / (p^−_in + (k−1)p^−_out)] · [k p^+_out / (p^+_in + (k−1)p^+_out)] < 1
(E_G)    [1 + (p^−_in − p^−_out) / (p^−_in + (k−1)p^−_out)] · [k p^+_out / (p^+_in + (k−1)p^+_out)] < 1
Table 1: Conditions for the Stochastic Block Model analysis of Section 4
The main bottleneck of Algorithm 1 is the computation of the eigenvectors in step 1. In Section 5 we
propose a scalable Krylov-based method to handle this problem.
Let us briefly discuss the motivating intuition behind the proposed clustering strategy. Algorithm 1, as well as state-of-the-art clustering algorithms based on the matrices in (3), relies on the k smallest eigenvalues of the considered operator and their corresponding eigenvectors. Thus the relative ordering of the eigenvalues plays a crucial role. Assume the eigenvalues to be enumerated in ascending order. Theorem 1 states that the functions (A, B) ↦ A + B and (A, B) ↦ A#B map eigenvalues of A and B having the same corresponding eigenvectors into the arithmetic mean λ_i(A) + λ_j(B) and the geometric mean √(λ_i(A) λ_j(B)), respectively, where λ_i(·) is the i-th smallest eigenvalue of the corresponding matrix. Note that the indices i and j are not the same in general, as the eigenvectors shared by A and B may be associated to eigenvalues having different positions in the relative orderings of A and B. This intuitively suggests that small eigenvalues of A + B are related to small eigenvalues of both A and B, whereas those of A#B are associated with small eigenvalues of either A or B, or both. Therefore the relative ordering of the small eigenvalues of L_GM is influenced by the presence of assortative clusters in G^+ (related to small eigenvalues of L^+_sym) or by disassortative clusters in G^− (related to small eigenvalues of Q^−_sym), whereas the ordering of the small eigenvalues of the arithmetic mean takes into account only the presence of both those situations.
In the next section, for networks following the stochastic block model, we analyze in expectation the spectrum of the normalized geometric mean Laplacian as well as that of the normalized Laplacians previously introduced. In this case the expected spectrum can be computed explicitly, and we observe that in expectation the ordering induced by blending the information of G^+ and G^− through the geometric mean allows us to recover the ground truth clusters perfectly, whereas the use of the arithmetic mean introduces a bias which results in a significantly higher clustering error.
4 Stochastic block model on signed graphs
In this section we present an analysis of different signed graph Laplacians based on the Stochastic Block Model (SBM). The SBM is a widespread benchmark generative model for networks showing a clustering, community, or group behaviour [22]. Given a prescribed set of groups of nodes, the SBM defines the presence of an edge as a random variable with probability depending on which groups it joins. To our knowledge this is the first analysis of spectral clustering on signed graphs with the stochastic block model. Let C₁, ..., C_k be ground truth clusters, all having the same size |C|. We let p^+_in (p^−_in) be the probability that there exists a positive (negative) edge between nodes in the same cluster, and let p^+_out (p^−_out) denote the probability of a positive (negative) edge between nodes in different clusters.
Calligraphic letters denote matrices in expectation. In particular, 𝒲^+ and 𝒲^− denote the weight matrices in expectation. We have 𝒲^+_ij = p^+_in and 𝒲^−_ij = p^−_in if v_i, v_j belong to the same cluster, whereas 𝒲^+_ij = p^+_out and 𝒲^−_ij = p^−_out if v_i, v_j belong to different clusters. Sorting nodes according to the ground truth clustering shows that 𝒲^+ and 𝒲^− have rank k.
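For intuition, a signed SBM with k equally sized ground truth clusters can be sampled as follows (a hypothetical helper of our own; the analysis in this section works with the expected matrices 𝒲^± rather than with samples):

```python
import numpy as np

def sample_signed_sbm(k, csize, p_in_pos, p_out_pos, p_in_neg, p_out_neg, seed=0):
    """Sample (W+, W-) from a signed SBM. Positive and negative edges are drawn
    independently in this simple sketch, so a pair may carry both signs."""
    rng = np.random.default_rng(seed)
    n = k * csize
    labels = np.repeat(np.arange(k), csize)
    same = labels[:, None] == labels[None, :]        # same-cluster indicator
    P_pos = np.where(same, p_in_pos, p_out_pos)      # expected W+ (off the diagonal)
    P_neg = np.where(same, p_in_neg, p_out_neg)      # expected W-
    iu = np.triu_indices(n, 1)                       # one coin per unordered pair
    W_pos = np.zeros((n, n))
    W_pos[iu] = rng.random(iu[0].size) < P_pos[iu]
    W_neg = np.zeros((n, n))
    W_neg[iu] = rng.random(iu[0].size) < P_neg[iu]
    return W_pos + W_pos.T, W_neg + W_neg.T, labels
```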
Consider the relations in Table 1. Conditions E_+ and E_− describe the presence of assortative or disassortative clusters in expectation. Note that, by Definition 1, a graph is balanced if and only if p^+_out = p^−_in = 0. We can see that if E_+ ∧ E_− holds, then G^+ and G^− both give information about the cluster structure. Further, if E_+ ∧ E_− holds then E_bal holds. Similarly, E_conf characterizes a graph where the relative amount of conflicts (i.e., positive edges between the clusters and negative edges inside the clusters) is small. Condition E_G is strictly related to such a setting; in fact, when E_− ∧ E_G holds then E_conf holds. Finally, condition E_vol implies that the expected volume in the negative graph is smaller than the expected volume in the positive one. This condition is therefore not related to any signed clustering structure.
Let

χ₁ = 1,   χ_i = (k − 1)·1_{C_i} − 1_{C̄_i}.

The use of k-means on χ_i, i = 1, ..., k identifies the ground truth communities C_i. As spectral clustering relies on the eigenvectors corresponding to the k smallest eigenvalues (see Algorithm 1), we derive here necessary and sufficient conditions such that in expectation the eigenvectors χ_i, i = 1, ..., k correspond to the k smallest eigenvalues of the normalized Laplacians introduced so far. In particular, we observe that condition E_G determines the ordering of the eigenvalues of the normalized geometric mean Laplacian. Instead, the ordering of the eigenvalues of the operators based on the arithmetic mean is related to E_bal and E_vol. The latter is not related to any clustering, and thus introduces a bias in the eigenvalue ordering which translates into a noisy embedding of the data points and in turn into a significantly higher clustering error.
Theorem 2. Let L_BN and L_SN be the normalized Laplacians defined in (3) of the expected graphs. The following statements are equivalent:
1. χ₁, ..., χ_k are the eigenvectors corresponding to the k smallest eigenvalues of L_BN.
2. χ₁, ..., χ_k are the eigenvectors corresponding to the k smallest eigenvalues of L_SN.
3. The two conditions E_bal and E_vol hold simultaneously.
Theorem 3. Let L_GM = L^+_sym # Q^−_sym be the geometric mean of the Laplacians of the expected graphs. Then χ₁, ..., χ_k are the eigenvectors corresponding to the k smallest eigenvalues of L_GM if and only if condition E_G holds.
Conditions for the geometric mean Laplacian of diagonally shifted Laplacians are available in the supplementary material. Intuition suggests that a good model should easily identify clusters when E_+ ∧ E_− holds. However, unlike condition E_G, the condition E_vol ∧ E_bal is not directly satisfied under that regime. Specifically, we have
Corollary 1. Assume that E_+ ∧ E_− holds. Then χ₁, ..., χ_k are eigenvectors corresponding to the k smallest eigenvalues of L_GM. Let p(k) denote the proportion of cases where χ₁, ..., χ_k are the eigenvectors of the k smallest eigenvalues of L_SN or L_BN; then p(k) ≤ 1/6 + 2/(3(k−1)) + 1/(k−1)².
In order to grasp the difference in expectation between L_BN, L_SN and L_GM, in Fig. 1 we present the proportion of cases where Theorems 2 and 3 hold under different settings. Experiments are done with all four parameters discretized in [0, 1] with 100 steps. The expected proportion of cases where E_G holds (Theorem 3) is far above the corresponding proportion for E_vol ∧ E_bal (Theorem 2), showing that in expectation the geometric mean Laplacian is superior to the other signed Laplacians. In Fig. 2 we present experiments on sampled graphs, with k-means on top of the k smallest eigenvectors. In all cases we consider clusters of size |C| = 100 and report the median clustering error (i.e., the error when clusters are labeled via majority vote) over 50 runs. The results show that the analysis made in expectation closely matches the actual behavior. In fact, even though we expect only one noisy eigenvector for L_BN and L_SN, the use of the geometric mean Laplacian significantly outperforms any other previously proposed technique in terms of clustering error. L_SN and L_BN achieve good clustering only when the graph resembles a k-balanced structure, whereas they fail even in the ideal situation where either the positive or the negative graph is informative about the cluster structure. As shown in Section 6, the advantages of L_GM over the other Laplacians discussed so far allow us to identify a clustering structure in the Wikipedia benchmark real-world signed network, where other clustering approaches have failed.
5 Krylov-based inverse power method for small eigenvalues of L^+_sym # Q^−_sym
The computation of the geometric mean A#B of two positive definite matrices of moderate size has been discussed extensively by various authors [20, 11, 12, 13]. However, when A and B have large dimensions, the approaches proposed so far become unfeasible; in fact, A#B is in general a full matrix even if A and B are sparse. In this section we present a scalable algorithm for the computation of the smallest eigenvectors of L^+_sym # Q^−_sym. The method is presented for a general pair of matrices A and B, to emphasize its general applicability, which makes it interesting in itself.
Figure 1: Fraction of cases where in expectation χ₁, ..., χ_k correspond to the k smallest eigenvalues under the SBM. (Three panels, plotted against the number of clusters k ∈ {2, 5, 10, 25, 50, 100}: "Positive and Negative Informative" (p^+_out < p^+_in and p^−_in < p^−_out), "Positive or Negative Informative" (p^+_out < p^+_in or p^−_in < p^−_out), and p^−_in + p^+_out < p^+_in + p^−_out; curves compare L_SN/L_BN, L_GM (ours), and an upper bound.)
Figure 2: Median clustering error under the stochastic block model over 50 runs. (Panels compare L^+_sym, Q^−_sym, L_SN, L_BN, L_AM and L_GM (ours) in a "Negative Informative" regime (p^−_in = 0.01, p^−_out = 0.09) and a "Positive Informative" regime (p^+_in = 0.09, p^+_out = 0.01), as functions of the information gaps p^+_in − p^+_out and p^−_in − p^−_out in [−0.05, 0.05] and of the graph sparsity in 2-10%.)
We remark that the method takes advantage of the sparsity of A and B and does not require explicitly computing the matrix A#B. To our knowledge this is the first effective method explicitly built for the computation of the eigenvectors of the geometric mean of two large and sparse positive definite matrices.
Given a positive definite matrix M with eigenvalues λ₁ ≤ ... ≤ λ_n, let H be any eigenspace of M associated to λ₁, ..., λ_t. The inverse power method (IPM) applied to M converges to an eigenvector x associated to the smallest eigenvalue λ_H of M such that λ_H ≠ λ_i, i = 1, ..., t. The pseudocode of IPM applied to A#B = A(A^{−1}B)^{1/2} is shown in Algorithm 2. Given a vector v and a matrix M, the notation solve{M, v} denotes a procedure returning the solution x of the linear system Mx = v. At each step the algorithm requires the solution of two linear systems. The first one (line 2) is solved by the preconditioned conjugate gradient method, where the preconditioner is obtained by the incomplete Cholesky decomposition of A. Note that the conjugate gradient method is very fast, as A is assumed sparse and positive definite, and it is matrix-free: it only requires computing the action of A on a vector and does not require forming A (nor its inverse) explicitly. The solution of the linear system occurring in line 3 is the major inner problem of the proposed algorithm. It is solved efficiently by means of an extended Krylov subspace technique that we describe in the next section. The proposed implementation ensures the whole IPM is matrix-free and scalable.
5.1 Extended Krylov subspace method for the solution of the linear system (A^{−1}B)^{1/2} x = y
We discuss here how to apply the technique known as the Extended Krylov Subspace Method (EKSM) to the solution of the linear system (A^{−1}B)^{1/2} x = y. Let M be a large and sparse matrix, and y a given vector. When f is a function with a single pole, EKSM is a very effective method to approximate the vector f(M)y without ever computing the matrix f(M) [8]. Note that, given two positive definite matrices A and B and a vector y, the vector we want to compute is x = (A^{−1}B)^{−1/2} y, so that our problem boils down to the computation of the product f(M)y, where M = A^{−1}B and f(X) = X^{−1/2}. The general idea of the s-th EKSM iteration is to project M onto the subspace

K_s(M, y) = span{y, My, M^{−1}y, ..., M^{s−1}y, M^{1−s}y},

and solve the problem there. The projection onto K_s(M, y) is realized by means of the Lanczos process, which produces a sequence of matrices V_s with orthogonal columns, such that the first column of V_s is a multiple of y and range(V_s) = K_s(M, y). Moreover, at each step we have

M V_s = V_s H_s + [u_{s+1}, v_{s+1}][e_{2s+1}, e_{2s+2}]^T,   (7)

where H_s is a 2s × 2s symmetric tridiagonal matrix, u_{s+1} and v_{s+1} are orthogonal to V_s, and e_i is the i-th canonical vector. The solution x is then approximated by x_s = V_s f(H_s) e₁ ‖y‖ ≈ f(M)y. If n is the order of M, then the exact solution is obtained after at most n steps. However, in practice significantly fewer iterations are enough to achieve a good approximation, as the error ‖x_s − x‖ decays exponentially with s (Thm. 3.4 and Prop. 3.6 in [14]). See the supplementary material for details.
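The following dense sketch (our own illustration, not the paper's implementation) conveys the EKSM idea for f(X) = X^{−1/2}: build the extended Krylov basis explicitly, project M onto it, and apply f to the small projected matrix. The real method never forms M or its powers and uses the A-inner product described below.

```python
import numpy as np
from scipy.linalg import fractional_matrix_power as fmp

def eksm_inv_sqrt_apply(M, y, s):
    """Approximate M^{-1/2} y by projection onto
    K_s(M, y) = span{y, My, M^{-1}y, ..., M^{s-1}y, M^{1-s}y} (dense sketch)."""
    vecs = [y]
    w_f, w_b = y.copy(), y.copy()
    for _ in range(1, s):
        w_f = M @ w_f                            # forward powers  M^j y
        w_b = np.linalg.solve(M, w_b)            # backward powers M^{-j} y
        vecs += [w_f, w_b]
    V, _ = np.linalg.qr(np.column_stack(vecs))   # orthonormal basis of K_s
    H = V.T @ M @ V                              # projected operator
    return V @ (fmp(H, -0.5) @ (V.T @ y))        # x_s ~ f(M) y
```

Here the Euclidean inner product is used for simplicity, so H is not tridiagonal as in (7); the tridiagonal structure appears when the basis is built with the ⟨·,·⟩_A inner product, as in Algorithm 3 below.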
The pseudocode for the extended Krylov iteration is presented in Algorithm 3. We use the stopping criterion proposed in [14]. It is worth pointing out that at step 4 of the algorithm we can freely choose any scalar product ⟨·,·⟩ without affecting formula (7) or the convergence properties of the method. As M = A^{−1}B, we use the scalar product ⟨u, v⟩_A = u^T A v induced by the positive definite matrix A, so that the computation of the tridiagonal matrix H_s in the algorithm simplifies to V_s^T B V_s. We refer to [9] for further details. As before, the solve procedure is implemented by means of the preconditioned conjugate gradient method, where the preconditioner is obtained by the incomplete Cholesky decomposition of the coefficient matrix. Figure 3 shows that we are able to compute the smallest eigenvector of L^+_sym # Q^−_sym at only a constant factor more cost than computing the eigenvector of the arithmetic mean, whereas directly forming the geometric mean and then computing its eigenvectors is unfeasible for large graphs.
Input: x₀, eigenspace H of A#B.
Output: Eigenpair (λ_H, x) of A#B.
1: repeat
2:   u_k ← solve{A, x_k}
3:   v_k ← solve{(A^{−1}B)^{1/2}, u_k}
4:   y_k ← project v_k onto H^⟂
5:   x_{k+1} ← y_k / ‖y_k‖₂
6: until tolerance reached
7: λ_H ← x_{k+1}^T x_k,  x ← x_{k+1}
Algorithm 2: IPM applied to A#B

Input: u₀ = y, V₀ = [ ]
Output: x = (A^{−1}B)^{−1/2} y
1: v₀ ← solve{B, A u₀}
2: for s = 0, 1, 2, ..., n do
3:   Ṽ_{s+1} ← [V_s, u_s, v_s]
4:   V_{s+1} ← orthogonalize the columns of Ṽ_{s+1} w.r.t. ⟨·,·⟩_A
5:   H_{s+1} ← V_{s+1}^T B V_{s+1}
6:   x_{s+1} ← H_{s+1}^{−1/2} e₁
7:   if tolerance reached then break
8:   u_{s+1} ← solve{A, B V_{s+1} e₁}
9:   v_{s+1} ← solve{B, A V_{s+1} e₂}
10: end
11: x ← V_{s+1} x_{s+1}
Algorithm 3: EKSM for the computation of (A^{−1}B)^{−1/2} y

Figure 3: Median execution time of 10 runs for different Laplacians. Graphs have two perfect clusters and 2.5% of edges among nodes. L_GM (ours) uses Algorithms 2 and 3, whereas we used Matlab's eigs for the other matrices. (The plot shows median time in seconds, on a log scale from 10⁰ to 10⁴, against graph sizes up to 10⁵, for L_SN (eigs), L_BN (eigs), L_GM (eigs) and L_GM (ours).) The use of eigs on L_GM is prohibitive, as it needs the matrix L_GM to be built (we use the toolbox provided in [2]), destroying the sparsity of the original graphs. Experiments are performed using one thread.
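To make Algorithm 2 concrete, here is a compact dense version (an illustrative sketch with our own defaults: linear systems are solved directly, and (A^{−1}B)^{1/2} is formed once up front, whereas the paper solves with it matrix-free via preconditioned CG and Algorithm 3):

```python
import numpy as np
from scipy.linalg import fractional_matrix_power as fmp

def ipm_geometric_mean(A, B, H=None, iters=500, tol=1e-10, seed=0):
    """Inverse power method on A#B = A (A^{-1}B)^{1/2} (dense sketch of Alg. 2).
    H: matrix whose orthonormal columns span the eigenspace to deflate, or None."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)
    S = np.real(fmp(np.linalg.solve(A, B), 0.5))   # (A^{-1}B)^{1/2}, formed once here
    for _ in range(iters):
        u = np.linalg.solve(A, x)                  # u <- solve{A, x}
        v = np.linalg.solve(S, u)                  # v <- solve{(A^{-1}B)^{1/2}, u}
        if H is not None:
            v -= H @ (H.T @ v)                     # project onto the complement of H
        v /= np.linalg.norm(v)
        if np.linalg.norm(v - np.sign(v @ x) * x) < tol:
            x = v
            break
        x = v
    lam = x @ (A @ (S @ x))                        # Rayleigh estimate on A#B = A S
    return lam, x
```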
6 Experiments
Sociology networks. We evaluate the signed Laplacians L_SN, L_BN, L_AM and L_GM on three real-world, moderate-size signed networks: the Highland tribes (Gahuku-Gama) network [21], the Slovene Parliamentary Parties network [15] and the US Supreme Court Justices network [7]. For the sake of comparison we take as ground truth the clustering stated in the corresponding references. We observe that all signed Laplacians yield zero clustering error.
Experiments on the Wikipedia signed network. We consider the Wikipedia adminship election dataset from [17], which describes relationships that are positive, negative or non-existent. We use Algorithms 1-3 and look for 30 clusters. The positive and negative adjacency matrices sorted according to our clustering are depicted in Figs. 4(a) and 4(b). We can observe the presence of a large, relatively empty cluster. Zooming into the denser portion of the graph we can see a k-balanced behavior (see Figs. 4(c) and 4(d)): the positive adjacency matrix shows assortative groups, resembling a block diagonal structure, while the negative adjacency matrix shows a disassortative setting. Using L_AM and L_BN we were not able to find any clustering structure, which corroborates the results reported in [4]. This further confirms that L_GM overcomes other clustering approaches. To the knowledge of the authors, this is the first time that a clustering structure has been found in this dataset.
Figure 4: Wikipedia weight matrices sorted according to the clustering obtained with L_GM (Alg. 1). Panels: (a) W^+, (b) W^−, (c) W^+ (zoom), (d) W^− (zoom).
Experiments on UCI datasets. We evaluate our method L_GM (Algorithms 1-3) against L_SN, L_BN and L_AM on datasets from the UCI repository (see Table 2). We build W^+ from a symmetric k^+-nearest-neighbor graph, whereas W^− is obtained from the symmetric k^−-farthest-neighbor graph. For each dataset we test all clustering methods over all possible choices of k^+, k^− ∈ {3, 5, 7, 10, 15, 20, 40, 60}. In Table 2 we report the fraction of cases where each method achieves the best and the strictly best clustering error over all 64 graphs, per dataset. We can see that our method outperforms the other methods across all datasets.
In the figure on the right of Table 2 we present the clustering error on the MNIST dataset, fixing k^+ = 10. With Q^−_sym alone one gets the highest clustering error, which shows that the k^−-farthest-neighbor graph is a source of noise and is not informative. In fact, we observe that a small subset of nodes is the farthest neighborhood of a large fraction of nodes. The noise from the k^−-farthest-neighbor graph strongly influences the performance of L_SN and L_BN, leading to a noisy embedding of the data points and in turn to a high clustering error. On the other hand, we can see that L_GM is robust, in the sense that its clustering performance is not affected negatively by the noise in the negative edges. Similar behaviors have been observed for the other datasets in Table 2 and are shown in the supplementary material.
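For reproducibility, the graph construction can be sketched as follows (hypothetical code with our own conventions: 0/1 weights and max-symmetrization; the exact weighting used in the experiments may differ):

```python
import numpy as np
from sklearn.metrics import pairwise_distances

def signed_knn_graphs(X, k_pos, k_neg):
    """W+ from the symmetric k+-nearest-neighbor graph,
    W- from the symmetric k--farthest-neighbor graph."""
    D = pairwise_distances(X)
    n = D.shape[0]
    np.fill_diagonal(D, np.inf)                    # exclude self from nearest sets
    near = np.argsort(D, axis=1)[:, :k_pos]
    np.fill_diagonal(D, -np.inf)                   # exclude self from farthest sets
    far = np.argsort(D, axis=1)[:, n - k_neg:]
    rows = np.arange(n)[:, None]
    W_pos = np.zeros((n, n)); W_pos[rows, near] = 1.0
    W_neg = np.zeros((n, n)); W_neg[rows, far] = 1.0
    return np.maximum(W_pos, W_pos.T), np.maximum(W_neg, W_neg.T)
```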
                      iris   wine   ecoli  optdig  USPS   pendig  MNIST
# vertices            150    178    310    5620    9298   10992   70000
# classes             3      3      3      10      10     10      10
L_SN   Best (%)       23.4   40.6   18.8   28.1    10.9   10.9    12.5
       Str. best (%)  10.9   21.9   14.1   28.1    9.4    10.9    12.5
L_BN   Best (%)       17.2   21.9   7.8    0.0     1.6    3.1     0.0
       Str. best (%)  7.8    4.7    6.3    0.0     1.6    3.1     0.0
L_AM   Best (%)       12.5   28.1   14.1   0.0     0.0    1.6     0.0
       Str. best (%)  10.9   14.1   12.5   0.0     0.0    1.6     0.0
L_GM   Best (%)       59.4   42.2   65.6   71.9    89.1   84.4    87.5
       Str. best (%)  57.8   35.9   60.9   71.9    87.5   84.4    87.5

(Right figure: clustering error on MNIST with k^+ = 10, plotted against k^− ∈ {5, 7, 10, 15, 20, 40, 60} for L^+_sym, Q^−_sym, L_SN, L_BN, L_AM and L_GM (ours).)
Table 2: Experiments on UCI datasets. Left: fraction of cases where methods achieve the best and strictly best clustering error. Right: clustering error on the MNIST dataset.
Acknowledgments. The authors acknowledge support by the ERC starting grant NOLEPRO.
References
[1] R. Bhatia. Positive Definite Matrices. Princeton University Press, 2009.
[2] D. Bini and B. Iannazzo. The Matrix Means Toolbox. http://bezout.dm.unipi.it/software/mmtoolbox/, May 2015.
[3] D. Cartwright and F. Harary. Structural balance: a generalization of Heider's theory. Psychological Review, 63(5):277-293, 1956.
[4] K. Chiang, J. Whang, and I. Dhillon. Scalable clustering of signed networks using balance normalized cut. In CIKM, pages 615-624, 2012.
[5] J. A. Davis. Clustering and structural balance in graphs. Human Relations, 20:181-187, 1967.
[6] M. Desai and V. Rao. A characterization of the smallest eigenvalue of a graph. Journal of Graph Theory, 18(2):181-194, 1994.
[7] P. Doreian and A. Mrvar. Partitioning signed social networks. Social Networks, 31(1):1-11, 2009.
[8] V. Druskin and L. Knizhnerman. Extended Krylov subspaces: approximation of the matrix square root and related functions. SIAM J. Matrix Anal. Appl., 19:755-771, 1998.
[9] M. Fasi and B. Iannazzo. Computing the weighted geometric mean of two large-scale matrices and its inverse times a vector. MIMS EPrint 2016.29.
[10] F. Harary. On the notion of balance of a signed graph. Michigan Mathematical Journal, 2:143-146, 1953.
[11] N. J. Higham, D. S. Mackey, N. Mackey, and F. Tisseur. Functions preserving matrix groups and iterations for the matrix square root. SIAM J. Matrix Anal. Appl., 26:849-877, 2005.
[12] B. Iannazzo. The geometric mean of two matrices from a computational viewpoint. Numer. Linear Algebra Appl., to appear, 2015.
[13] B. Iannazzo and M. Porcelli. The Riemannian Barzilai-Borwein method with nonmonotone line-search and the Karcher mean computation. Optimization Online, December 2015.
[14] L. Knizhnerman and V. Simoncini. A new investigation of the extended Krylov subspace method for matrix function evaluations. Numer. Linear Algebra Appl., 17:615-638, 2009.
[15] S. Kropivnik and A. Mrvar. An analysis of the Slovene parliamentary parties networks. Developments in Statistics and Methodology, pages 209-216, 1996.
[16] J. Kunegis, S. Schmidt, A. Lommatzsch, J. Lerner, E. Luca, and S. Albayrak. Spectral analysis of signed graphs for clustering, prediction and visualization. In ICDM, pages 559-570, 2010.
[17] J. Leskovec and A. Krevl. SNAP Datasets: Stanford Large Network Dataset Collection. http://snap.stanford.edu/data, June 2014.
[18] S. Liu. Multi-way dual Cheeger constants and spectral bounds of graphs. Advances in Mathematics, 268:306-338, 2015.
[19] U. von Luxburg. A tutorial on spectral clustering. Statistics and Computing, 17(4):395-416, Dec. 2007.
[20] M. Raïssouli and F. Leazizi. Continued fraction expansion of the geometric matrix mean and applications. Linear Algebra Appl., 359:37-57, 2003.
[21] K. E. Read. Cultures of the Central Highlands, New Guinea. Southwestern Journal of Anthropology, 10(1):1-43, 1954.
[22] K. Rohe, S. Chatterjee, B. Yu, et al. Spectral clustering and the high-dimensional stochastic blockmodel. The Annals of Statistics, 39(4):1878-1915, 2011.
[23] J. Tang, Y. Chang, C. Aggarwal, and H. Liu. A survey of signed network mining in social media. arXiv preprint arXiv:1511.07569, 2015.
Yiwen Guo?
Intel Labs China
[email protected]
Anbang Yao
Intel Labs China
[email protected]
Yurong Chen
Intel Labs China
[email protected]
Abstract
Deep learning has become a ubiquitous technology to improve machine intelligence.
However, most of the existing deep models are structurally very complex, making
them difficult to be deployed on the mobile platforms with limited computational
power. In this paper, we propose a novel network compression method called
dynamic network surgery, which can remarkably reduce the network complexity
by making on-the-fly connection pruning. Unlike the previous methods which
accomplish this task in a greedy way, we properly incorporate connection splicing
into the whole process to avoid incorrect pruning and make it as a continual network
maintenance. The effectiveness of our method is proved with experiments. Without
any accuracy loss, our method can efficiently compress the number of parameters
in LeNet-5 and AlexNet by a factor of 108? and 17.7? respectively, proving that
it outperforms the recent pruning method by considerable margins. Code and some
models are available at https://github.com/yiwenguo/Dynamic-Network-Surgery.
1 Introduction
As a family of brain inspired models, deep neural networks (DNNs) have substantially advanced a
variety of artificial intelligence tasks including image classification [13, 19, 11], natural language
processing, speech recognition and face recognition.
Despite these tremendous successes, recently designed networks tend to have more stacked layers,
and thus more learnable parameters. For instance, AlexNet [13] designed by Krizhevsky et al.
has 61 million parameters to win the ILSVRC 2012 classification competition, which is over 100
times more than that of LeCun's conventional model [15] (e.g., LeNet-5), let alone the much more
complex models like VGGNet [19]. Since more parameters means more storage requirement and
more floating-point operations (FLOPs), it increases the difficulty of applying DNNs on mobile
platforms with limited memory and processing units. Moreover, the battery capacity can be another
bottleneck [9].
Although DNN models normally require a vast number of parameters to guarantee their superior
performance, significant redundancies have been reported in their parameterizations [4]. Therefore,
with a proper strategy, it is possible to compress these models without significantly losing their
prediction accuracies. Among existing methods, network pruning appears to be an outstanding one
due to its surprising ability of accuracy loss prevention. For instance, Han et al. [9] recently propose to
make "lossless" DNN compression by deleting unimportant parameters and retraining the remaining
ones (as illustrated in Figure 1(b)), somehow similar to a surgery process.
However, due to the complex interconnections among hidden neurons, parameter importance may
change dramatically once the network surgery begins. This leads to two main issues in [9] (and some
other classical methods [16, 10] as well). The first issue is the possibility of irretrievable network
* This work was done when Yiwen Guo was an intern at Intel Labs China, supervised by Anbang Yao, who is responsible for correspondence.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
damage. Since the pruned connections have no chance to come back, incorrect pruning may cause
severe accuracy loss. As a consequence, the compression rate must be kept overly conservative to avoid such loss. Another issue is learning inefficiency. As in [9], several iterations of alternate pruning
and retraining are necessary to get a fair compression rate on AlexNet, while each retraining process
consists of millions of iterations, which can be very time consuming.
In this paper, we attempt to address these issues and pursue the compression limit of the pruning
method. To be more specific, we propose to sever redundant connections by means of continual
network maintenance, which we call dynamic network surgery. The proposed method involves
two key operations: pruning and splicing, conducted with two different purposes. Apparently,
the pruning operation is made to compress network models, but over pruning or incorrect pruning
should be responsible for the accuracy loss. In order to compensate the unexpected loss, we properly
incorporate the splicing operation into network surgery, and thus enabling connection recovery once
the pruned connections are found to be important any time. These two operations are integrated
together by updating parameter importance whenever necessary, making our method dynamic.
In fact, the above strategies help to make the whole process flexible. They are beneficial not only
to better approach the compression limit, but also to improve the learning efficiency, which will be
validated in Section 4. In our method, pruning and splicing naturally constitute a circular procedure
and dynamically divide the network connections into two categories, akin to the synthesis of excitatory
and inhibitory neurotransmitter in human nervous systems [17].
The rest of this paper is structured as follows. In Section 2, we introduce the related methods of DNN
compression by briefly discussing their merits and demerits. In Section 3, we highlight our intuition
of dynamic network surgery and introduce its implementation details. Section 4 experimentally
analyses our method and Section 5 draws the conclusions.
(a)
(b)
Figure 1: The pipeline of (a) our dynamic network surgery and (b) Han et al.'s method [9], using AlexNet as an example. [9] needs more than 4800K iterations to get a fair compression rate (9×), while our method runs only 700K iterations to yield a significantly better result (17.7×) with comparable prediction accuracy.
2 Related Works
In order to make DNN models portable, a variety of methods have been proposed. Vanhoucke et al. [20] analyse the effectiveness of data layout, batching and the usage of Intel fixed-point instructions, obtaining a 3× speedup on x86 CPUs. Mathieu et al. [18] explore fast Fourier transforms (FFTs) on GPUs and improve the speed of CNNs by performing convolution calculations in the frequency domain.
An alternative category of methods resorts to matrix (or tensor) decomposition. Denil et al. [4] propose to approximate parameter matrices with appropriately constructed low-rank decompositions. Their method achieves a 1.6× speedup on the convolutional layer with a 1% drop in prediction accuracy.
Following similar ideas, some subsequent methods can provide more significant speedups [5, 22, 14].
Although matrix (or tensor) decomposition can be beneficial to DNN compression and speedup, these
methods normally incur severe accuracy loss under high compression requirement.
Vector quantization is another way to compress DNNs. Gong et al. [6] explore several such methods
and point out the effectiveness of product quantization. HashNet proposed by Chen et al. [1] handles
network compression by grouping its parameters into hash buckets. It is trained with a standard
backpropagation procedure and should be able to make substantial storage savings. The recently proposed BinaryConnect [2] and Binarized Neural Networks [3] are able to compress DNNs by a factor of 32×, though a noticeable accuracy loss is essentially inevitable.
Figure 2: Overview of the dynamic network surgery for a model with parameter redundancy.
This paper follows the idea of network pruning. It starts from the early work of LeCun et al. [16], which makes use of the second derivatives of the loss function to balance training loss and model
complexity. As an extension, Hassibi and Stork [10] propose to take non-diagonal elements of the
Hessian matrix into consideration, producing compression results with less accuracy loss. In spite
of their theoretical optimization, these two methods suffer from the high computational complexity
when tackling large networks, regardless of the accuracy drop. Very recently, Han et al. [9] explore
the magnitude-based pruning in conjunction with retraining, and report promising compression results
without accuracy loss. It has also been validated that the sparse matrix-vector multiplication can
further be accelerated by certain hardware design, making it more efficient than traditional CPU
and GPU calculations [7]. The drawback of Han et al.'s method [9] is mostly its potential risk of
irretrievable network damage and learning inefficiency.
Our research on network pruning is partly inspired by [9], not only because it can be very effective to
compress DNNs, but also because it makes no assumption on the network structure. In particular,
this branch of methods can be naturally combined with many other methods introduced above, to
further reduce the network complexity. In fact, Han et al. [8] have already tested such combinations
and obtained excellent results.
3 Dynamic Network Surgery
In this section, we highlight the intuition of our method and present its implementation details. In
order to simplify the explanations, we only talk about the convolutional layers and the fully connected
layers. However, as claimed in [8], our pruning method can also be applied to some other layer types
as long as their underlying mathematical operations are inner products on vector spaces.
3.1 Notations
First of all, we clarify the notations used in this paper. Suppose a DNN model can be represented as {W_k : 0 ≤ k ≤ C}, in which W_k denotes the matrix of connection weights in the k-th layer. For a fully connected layer with p_k-dimensional input and q_k-dimensional output, the size of W_k is simply q_k × p_k. For a convolutional layer with learnable kernels, we unfold the coefficients of each kernel into a vector and concatenate all of them to form W_k as a matrix.
In order to represent a sparse model with part of its connections pruned away, we use {W_k, T_k : 0 ≤ k ≤ C}. Each T_k is a binary matrix whose entries indicate the states of the network connections, i.e., whether they are currently pruned or not. Therefore, these additional matrices can be considered as mask matrices.
3.2 Pruning and Splicing
Since our goal is network pruning, the desired sparse model shall be learnt from its dense reference.
Apparently, the key is to abandon unimportant parameters and keep the important ones. However, the
parameter importance (i.e., the connection importance) in a certain network is extremely difficult
to measure because of the mutual influences and mutual activations among interconnected neurons.
That is, a network connection may be redundant due to the existence of some others, but it will soon
become crucial once the others are removed. Therefore, it should be more appropriate to conduct a
learning process and continually maintain the network structure.
Taking the k-th layer as an example, we propose to solve the following optimization problem:

min_{W_k, T_k} L(W_k ⊙ T_k)   s.t.   T_k^(i,j) = h_k(W_k^(i,j)), ∀(i, j) ∈ I,   (1)

in which L(·) is the network loss function, ⊙ indicates the Hadamard product operator, the set I consists of all the entry indices in matrix W_k, and h_k(·) is a discriminative function, which satisfies h_k(w) = 1 if parameter w seems to be crucial in the current layer, and h_k(w) = 0 otherwise. The function h_k(·) is designed on the basis of some prior knowledge so that it constrains the feasible region of W_k ⊙ T_k and simplifies the original NP-hard problem. For the sake of topic conciseness, we leave the discussion of the function h_k(·) to Section 3.3. Problem (1) can be solved by alternately updating W_k and T_k through the stochastic gradient descent (SGD) method, which will be introduced in the following paragraphs.
Since the binary matrix T_k can be determined with the constraints in (1), we only need to investigate the update scheme of W_k. Inspired by the method of Lagrange multipliers and gradient descent, we give the following scheme for updating W_k:

W_k^(i,j) ← W_k^(i,j) − β · ∂L(W_k ⊙ T_k) / ∂(W_k^(i,j) T_k^(i,j)),   ∀(i, j) ∈ I,   (2)

in which β indicates a positive learning rate. It is worth mentioning that we update not only the important parameters, but also the ones corresponding to zero entries of T_k, which are considered unimportant and ineffective for decreasing the network loss. This strategy is beneficial to the flexibility of our method because it enables the splicing of improperly pruned connections.
The partial derivatives in formula (2) can be calculated by the chain rule with a randomly chosen minibatch of samples. Once the matrices W_k and T_k are updated, they are applied to re-calculate the whole network activations and the loss function gradient. Repeating these steps iteratively, the sparse model becomes able to produce excellent accuracy. The above procedure is summarized in Algorithm 1.
Algorithm 1 Dynamic network surgery: the SGD method for solving optimization problem (1)
Input: X: training data (with or without labels); {Ŵ_k : 0 ≤ k ≤ C}: the reference model; β: base learning rate; f: learning policy.
Output: {W_k, T_k : 0 ≤ k ≤ C}: the updated parameter matrices and their binary masks.
Initialize W_k ← Ŵ_k, T_k ← 1, ∀0 ≤ k ≤ C, β ← 1 and iter ← 0
repeat
  Choose a minibatch of network input from X
  Forward propagation and loss calculation with (W_0 ⊙ T_0), ..., (W_C ⊙ T_C)
  Backward propagation of the model output and generation of ∇L
  for k = 0, ..., C do
    Update T_k by function h_k(·) and the current W_k, with a probability of σ(iter)
    Update W_k by formula (2) and the current loss function gradient ∇L
  end for
  Update: iter ← iter + 1 and β ← f(β, iter)
until iter reaches its desired maximum
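For a single layer, the inner loop of Algorithm 1 can be sketched as follows (our own NumPy pseudocode; the forward/backward pass that produces grad_masked is abstracted away, and all names are hypothetical):

```python
import numpy as np

def surgery_step(W, T, grad_masked, h_k, beta, p_update, rng):
    """One iteration of dynamic network surgery for one layer.
    grad_masked: dL/d(W*T) from backpropagation through the masked weights;
    h_k: the importance function of eq. (3); p_update: probability sigma(iter)."""
    if rng.random() < p_update:       # occasionally re-decide prune/splice states
        T = h_k(W, T)
    W = W - beta * grad_masked        # eq. (2): update all weights, pruned ones too
    return W, T
```

Updating the pruned weights as well is what makes splicing possible: a pruned weight can keep growing in magnitude and later cross the threshold that re-activates it.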
Note that the dynamic property of our method shows up in two aspects. On one hand, pruning operations can be performed whenever the existing connections seem to become unimportant. On the other hand, mistakenly pruned connections are re-established if they once again appear to be important. The latter operation plays the dual role of network pruning, and thus it is called "network splicing" in this paper. Pruning and splicing constitute a circular procedure by constantly updating the connection weights and setting different entries in T_k, which is analogous to the synthesis of excitatory and inhibitory neurotransmitters in the human nervous system [17]. See Figure 2 for an overview of our method; the method pipeline can be found in Figure 1(a).
3.3 Parameter Importance
Since the measure of parameter importance influences the states of the network connections, the functions h_k(·), 0 ≤ k ≤ C, are essential to our dynamic network surgery. We have tested several candidates and found the absolute value of the input to be the best choice, as claimed in [9]. That is, the parameters with relatively small magnitude are temporarily pruned, while the others with large magnitude are kept or spliced in each iteration of Algorithm 1. Obviously, the threshold values have a significant impact on the final compression rate. For a certain layer, a single threshold could be set based on the average absolute value and variance of its connection weights. However, to improve the robustness of our method, we use two thresholds a_k and b_k, importing a small margin t and setting b_k = a_k + t in Equation (3). For the parameters whose magnitudes fall between the two thresholds, we set the function outputs to the corresponding entries in T_k, which means these parameters will neither be pruned nor spliced in the current iteration:
h_k(W_k^(i,j)) =  0,           if a_k > |W_k^(i,j)|
                  T_k^(i,j),   if a_k ≤ |W_k^(i,j)| < b_k      (3)
                  1,           if b_k ≤ |W_k^(i,j)|
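A possible realization of h_k is sketched below (illustrative code: the paper derives a_k from the average absolute value and variance of the layer's weights, but the exact constant used here is our own assumption):

```python
import numpy as np

def make_h_k(W_ref, t=0.01, c=0.7):
    """Build h_k of eq. (3). a_k comes from the layer's weight statistics
    (the factor c is a hypothetical choice), and b_k = a_k + t for robustness."""
    a_k = np.mean(np.abs(W_ref)) + c * np.std(np.abs(W_ref))
    b_k = a_k + t

    def h_k(W, T):
        mag = np.abs(W)
        T_new = T.copy()                 # entries in [a_k, b_k) keep their state
        T_new[mag < a_k] = 0.0           # prune clearly unimportant weights
        T_new[mag >= b_k] = 1.0          # splice clearly important weights back
        return T_new

    return h_k
```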
3.4 Convergence Acceleration
Considering that Algorithm 1 is a bit more complicated than the standard backpropagation method, we take a few more steps to boost its convergence. First of all, we suggest slowing down the pruning and splicing frequencies, because these operations lead to network structure changes. This can be done by triggering the update scheme of T_k stochastically, with a probability of p = σ(iter), rather than doing it constantly. The function σ(·) should be monotonically non-increasing and satisfy σ(0) = 1. After a prolonged decrease, the probability p may even be set to zero, i.e., no pruning or splicing will be conducted any longer.
Another possible reason for slow convergence is the vanishing gradient problem. Since a large percentage of connections are pruned away, the network structure becomes much simpler and probably even much "thinner" after applying our method. Thus, the loss function derivatives are likely to be very small, especially when the reference model is very deep. We resolve this problem by pruning the convolutional layers and the fully connected layers separately, still in the dynamic way, which is somehow similar to [9].
4 Experimental Results
In this section, we experimentally analyse the proposed method and apply it to some popular network models. For fair comparison and easy reproduction, all the reference models are trained by the GPU implementation of the Caffe package [12] with .prototxt files provided by the community.² Also, we follow the default experimental settings for the SGD method, including the training batch size, base learning rate, learning policy and maximal number of training iterations. Once the reference models are obtained, we directly apply our method to reduce their model complexity. A brief summary of the compression results is shown in Table 1.
Table 1: Dynamic network surgery can remarkably reduce the model complexity of some popular networks, while the prediction error rate does not increase.

Model                     Top-1 error   Parameters   Iterations   Compression
LeNet-5 reference         0.91%         431K         10K
LeNet-5 pruned            0.91%         4.0K         16K          108×
LeNet-300-100 reference   2.28%         267K         10K
LeNet-300-100 pruned      1.99%         4.8K         25K          56×
AlexNet reference         43.42%        61M          450K
AlexNet pruned            43.09%        3.45M        700K         17.7×

² Except for the simulation experiment and the LeNet-300-100 experiments, for which we created the .prototxt files ourselves because they are not available in the Caffe model zoo.
4.1 The Exclusive-OR Problem
To begin with, we consider an experiment on synthetic data to preliminarily verify the effectiveness of our method and visualize its compression quality. The exclusive-OR (XOR) problem is a good option: it is a nonlinear classification problem, as illustrated in Figure 3(a). In this experiment, we turn the original problem into a more complicated one, as in Figure 3(b), in which Gaussian noise is mixed with the original data points (0, 0), (0, 1), (1, 0) and (1, 1).
Figure 3: The Exclusive-OR (XOR) classification problem (a) without noise and (b) with noise.
In order to classify these samples, we design a network model as illustrated in the left part of
Figure 4(a), which consists of 21 connections and each of them has a weight to be learned. The
sigmoid function is chosen as the activation function for all the hidden and output neurons. Twenty
thousand samples were randomly generated for the experiment, in which half of them were used as
training samples and the rest as test samples.
By 100,000 iterations of learning, this three-layer neural network achieves a prediction error rate of
0.31%. The weight matrix of network connections between input and hidden neurons can be found in
Figure 4(b). Apparently, its first and last rows share similar elements, which means there are two
hidden neurons functioning similarly. Hence, it is appropriate to use this model as a compression
reference, even though it is not very large. After 150,000 iterations, the reference model will be
compressed into the right side of Figure 4(a), and the new connection weights and their masks are
shown in Figure 4(b). The grey and green patches in T1 stand for those entries equal to one, and the
corresponding connections shall be kept. In particular, the green ones indicate the connections were
mistakenly pruned in the beginning but spliced during the surgery. The other patches (i.e., the black
ones) indicate the corresponding connections are permanently pruned in the end.
Figure 4: Dynamic network surgery on a three-layer neural network for the XOR problem. (a) The network complexity is reduced to the optimum. (b) The connection weights are updated with masks.
The compressed model has a prediction error rate of 0.30%, which is slightly better than that of the
reference model, even though 40% of its parameters are set to be zero. Note that, the remaining
hidden neurons (excluding the bias unit) act as three different logic gates and altogether make up the XOR classifier. However, if the pruning operations are conducted only on the initial parameter
magnitude (as in [9]), then probably four hidden neurons will be finally kept, which is obviously not
the optimal compression result.
In addition, if we reduce the impact of Gaussian noises and enlarge the margin between positive and
negative samples, then the current model can be further compressed, so that one more hidden neuron
will be pruned by our method.
So far, we have carefully explained the mechanism behind our method and preliminarily verified
its effectiveness. In the following subsections, we will further test our method on three popular NN
models and make quantitative comparisons with other network compression methods.
4.2 The MNIST database
MNIST is a database of handwritten digits that is widely used to experimentally evaluate machine learning methods. As in [9], we test our method on two network models: LeNet-5 and LeNet-300-100.
LeNet-5 is a conventional CNN model which consists of 4 learnable layers, including 2 convolutional
layers and 2 fully connected layers. It is designed by LeCun et al. [15] for document recognition. With
431K parameters to be learned, we train this model for 10,000 iterations and obtain a prediction error
rate of 0.91%. LeNet-300-100, as described in [15], is a classical feedforward neural network with
three fully connected layers and 267K learnable parameters. It is also trained for 10,000 iterations,
following the same learning policy as with LeNet-5. The well trained LeNet-300-100 model achieves
an error rate of 2.28%.
With the proposed method, we are able to compress these two models. The same batch size, learning
rate and learning policy are set as with the reference training processes, except for the maximal
number of iterations, which is properly increased. The results are shown in Table 1. After convergence,
the network parameters of LeNet-5 and LeNet-300-100 are reduced by factors of 108× and 56×,
respectively, which means less than 1% and 2% of the network connections are kept, while the
prediction accuracies are as good or slightly better.
Table 2: Comparison of our compression results on LeNet-5 and LeNet-300-100 with those of [9]. The percentages of remaining parameters after applying Han et al.'s method [9] and our method are shown in the last two columns.

Model           Layer   Params.   Params.% [9]   Params.% (Ours)
LeNet-5         conv1   0.5K      ≈ 66%          14.2%
                conv2   25K       ≈ 12%          3.1%
                fc1     400K      ≈ 8%           0.7%
                fc2     5K        ≈ 19%          4.3%
                Total   431K      ≈ 8%           0.9%
LeNet-300-100   fc1     236K      ≈ 8%           1.8%
                fc2     30K       ≈ 9%           1.8%
                fc3     1K        ≈ 26%          5.5%
                Total   267K      ≈ 8%           1.8%
To better demonstrate the advantage of our method, we make layer-by-layer comparisons between our compression results and Han et al.'s [9] in Table 2. To the best of our knowledge, their method is so far the most effective pruning method, if learning inefficiency is not a concern. However, our method still achieves at least 4× the compression improvement over their method. Besides, due to the significant advantage over Han et al.'s models [9], our compressed models will undoubtedly also be much faster than theirs.
ImageNet and AlexNet
In the final experiment, we apply our method to AlexNet [13], which wins the ILSVRC 2012
classification competition. As with the previous experiments, we train the reference model first.
7
Without any data augmentation, we obtain a reference model with 61M well-learned parameters after
450K iterations of training (i.e., roughly 90 epochs). Then we perform the network surgery on it.
AlexNet consists of 8 learnable layers, which is considered to be deep. So we prune the convolutional
layers and fully connected layers separately, as previously discussed in Section 3.4. The training
batch size, base learning rate and learning policy still keep the same with reference training process.
We run 320K iterations for the convolutional layers and 380K iterations for the fully connected layers,
which means 700K iterations in total (i.e., roughly 140 epochs). In the test phase, we use just the
center crop and test our compressed model on the validation set.
Table 3: Comparison of different compressed models, with Top-1 and Top-5 prediction error rates, the number of training epochs and the final compression rate.

Model                            Top-1 error   Top-5 error   Epochs   Compression
Fastfood 32 (AD) [21]            41.93%        -             -        2×
Fastfood 16 (AD) [21]            42.90%        -             -        3.7×
Naive Cut [9]                    57.18%        23.23%        0        4.4×
Han et al. [9]                   42.77%        19.67%        ≈ 960    9×
Dynamic network surgery (Ours)   43.09%        19.99%        ≈ 140    17.7×
Table 3 compares the result of our method with some others. The four compared models are built by applying Han et al.'s method [9] and the adaptive fastfood transform method [21]. When compared with these "lossless" methods, our method achieves the best result in terms of the compression rate. Besides, after an acceptable number of epochs, the prediction error rate of our model is comparable to or even better than those of models compressed from better references.
In order to make more detailed comparisons, we compare the percentage of remaining parameters in our compressed model with that of Han et al.'s [9], since they achieve the second best compression rate. As shown in Table 4, our method compresses more parameters on almost every single layer in AlexNet, which means both the storage requirement and the number of FLOPs are reduced further when compared with [9]. Besides, our learning process is also much more efficient, so considerably fewer epochs are needed (at least a 6.8× reduction).
Table 4: Comparison of our method with [9] on AlexNet.

Layer   Params.   Params.% [9]   Params.% (Ours)
conv1   35K       ≈ 84%          53.8%
conv2   307K      ≈ 38%          40.6%
conv3   885K      ≈ 35%          29.0%
conv4   664K      ≈ 37%          32.3%
conv5   443K      ≈ 37%          32.5%
fc1     38M       ≈ 9%           3.7%
fc2     17M       ≈ 9%           6.6%
fc3     4M        ≈ 25%          4.6%
Total   61M       ≈ 11%          5.7%
5 Conclusions
In this paper, we have investigated the compression of DNNs and proposed a novel method called dynamic network surgery. Unlike previous methods, which conduct pruning and retraining alternately, our method incorporates connection splicing into the surgery and implements the whole process in a dynamic way. By utilizing our method, most parameters in DNN models can be deleted while the prediction accuracy does not decrease. The experimental results show that our method compresses the number of parameters in LeNet-5 and AlexNet by factors of 108× and 17.7×, respectively, which is superior to the recent pruning method by considerable margins. Besides, the learning efficiency of our method is also better, so fewer epochs are needed.
References
[1] Wenlin Chen, James T. Wilson, Stephen Tyree, Kilian Q. Weinberger, and Yixin Chen. Compressing neural
networks with the hashing trick. In ICML, 2015.
[2] Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. BinaryConnect: Training deep neural
networks with binary weights during propagations. In NIPS, 2015.
[3] Matthieu Courbariaux, Itay Hubara, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Binarized neural
networks: Training deep neural networks with weights and activations constrained to +1 or -1. arXiv
preprint arXiv:1602.02830v3, 2016.
[4] Misha Denil, Babak Shakibi, Laurent Dinh, Marc'Aurelio Ranzato, and Nando de Freitas. Predicting
parameters in deep learning. In NIPS, 2013.
[5] Emily L. Denton, Wojciech Zaremba, Joan Bruna, Yann LeCun, and Rob Fergus. Exploiting linear structure
within convolutional networks for efficient evaluation. In NIPS, 2014.
[6] Yunchao Gong, Liu Liu, Ming Yang, and Lubomir Bourdev. Compressing deep convolutional networks
using vector quantization. arXiv preprint arXiv:1412.6115, 2014.
[7] Song Han, Xingyu Liu, Huizi Mao, Jing Pu, Ardavan Pedram, Mark A Horowitz, and William J. Dally.
EIE: Efficient inference engine on compressed deep neural network. In ISCA, 2016.
[8] Song Han, Huizi Mao, and William J. Dally. Deep compression: Compressing deep neural networks with
pruning, trained quantization and Huffman coding. In ICLR, 2016.
[9] Song Han, Jeff Pool, John Tran, and William J. Dally. Learning both weights and connections for efficient
neural networks. In NIPS, 2015.
[10] Babak Hassibi and David G. Stork. Second order derivatives for network pruning: Optimal brain surgeon.
In NIPS, 1993.
[11] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition.
In CVPR, 2016.
[12] Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio
Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. In ACM
MM, 2014.
[13] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional
neural networks. In NIPS, 2012.
[14] Vadim Lebedev, Yaroslav Ganin, Maksim Rakhuba, Ivan Oseledets, and Victor Lempitsky. Speeding-up
convolutional neural networks using fine-tuned cp-decomposition. In ICLR, 2015.
[15] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to
document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[16] Yann LeCun, John S Denker, Sara A Solla, Richard E Howard, and Lawrence D Jackel. Optimal brain
damage. In NIPS, 1989.
[17] Harvey Lodish, Arnold Berk, S Lawrence Zipursky, Paul Matsudaira, David Baltimore, and James Darnell.
Molecular Cell Biology: Neurotransmitters, Synapses, and Impulse Transmission. W. H. Freeman, 2000.
[18] Michael Mathieu, Mikael Henaff, and Yann LeCun. Fast training of convolutional networks through FFTs.
arXiv preprint arXiv:1312.5851, 2013.
[19] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2014.
[20] Vincent Vanhoucke, Andrew Senior, and Mark Z. Mao. Improving the speed of neural networks on CPUs.
In NIPS Workshop, 2011.
[21] Zichao Yang, Marcin Moczulski, Misha Denil, Nando de Freitas, Alex Smola, Le Song, and Ziyu Wang.
Deep fried convnets. In ICCV, 2015.
[22] Xiangyu Zhang, Jianhua Zou, Xiang Ming, Kaiming He, and Jian Sun. Efficient and accurate approximations of nonlinear convolutional networks. In CVPR, 2015.
On Valid Optimal Assignment Kernels and
Applications to Graph Classification
Nils M. Kriege
Department of Computer Science
TU Dortmund, Germany
[email protected]
Pierre-Louis Giscard
Department of Computer Science
University of York, UK
[email protected]
Richard C. Wilson
Department of Computer Science
University of York, UK
[email protected]
Abstract
The success of kernel methods has initiated the design of novel positive semidefinite functions, in particular for structured data. A leading design paradigm for
this is the convolution kernel, which decomposes structured objects into their parts
and sums over all pairs of parts. Assignment kernels, in contrast, are obtained
from an optimal bijection between parts, which can provide a more valid notion
of similarity. In general however, optimal assignments yield indefinite functions,
which complicates their use in kernel methods. We characterize a class of base
kernels used to compare parts that guarantees positive semidefinite optimal assignment kernels. These base kernels give rise to hierarchies from which the optimal
assignment kernels are computed in linear time by histogram intersection. We
apply these results by developing the Weisfeiler-Lehman optimal assignment kernel
for graphs. It provides high classification accuracy on widely-used benchmark data
sets improving over the original Weisfeiler-Lehman kernel.
1 Introduction
The various existing kernel methods can conveniently be applied to any type of data, for which a
kernel is available that adequately measures the similarity between any two data objects. This includes
structured data like images [2, 5, 11], 3d shapes [1], chemical compounds [8] and proteins [4], which
are often represented by graphs. Most kernels for structured data decompose both objects and add up
the pairwise similarities between their parts following the seminal concept of convolution kernels
proposed by Haussler [12]. In fact, many graph kernels can be seen as instances of convolution
kernels under different decompositions [23].
A fundamentally different approach with good prospects is to assign the parts of one object to the parts of the other, such that the total similarity between the assigned parts is the maximum possible. Finding such a bijection is known as the assignment problem and is well-studied in combinatorial optimization [6]. This approach has been successfully applied to graph comparison, e.g., in general graph
matching [9, 17] as well as in kernel-based classification [8, 18, 1]. In contrast to convolution kernels,
assignments establish structural correspondences and thereby alleviate the problem of diagonal
dominance at the same time. However, the similarities derived in this way are not necessarily positive
semidefinite (p.s.d.) [22, 23] and hence do not give rise to valid kernels, severely limiting their use in
kernel methods.
Our goal in this paper is to consider a particular class of base kernels which give rise to valid
assignment kernels. In the following we use the term valid to mean a kernel which is symmetric and
positive semidefinite. We formalize the considered problem: Let [𝒳]^n denote the set of all n-element subsets of a set 𝒳 and B(X, Y) the set of all bijections between X, Y ∈ [𝒳]^n for n ∈ ℕ. We study the optimal assignment kernel K_B^k on [𝒳]^n defined as

$$K_B^k(X, Y) = \max_{B \in \mathcal{B}(X,Y)} W(B), \quad \text{where} \quad W(B) = \sum_{(x,y) \in B} k(x, y) \qquad (1)$$

and k is a base kernel on 𝒳. For clarity of presentation we assume n to be fixed. In order to apply the kernel to sets of different cardinality, we may fill up the smaller set by new objects z with k(z, x) = 0 for all x ∈ 𝒳 without changing the result.
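For reference, Eq. (1) can be evaluated directly with the Hungarian method; a sketch using SciPy (cubic time, which the later reduction to histogram intersection improves on for strong base kernels):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def optimal_assignment_kernel(X, Y, k):
    """Direct evaluation of Eq. (1): the bijection maximising the summed
    base-kernel values. Sets of different size can be padded beforehand
    with dummy objects of similarity 0, as described above."""
    K = np.array([[k(x, y) for y in Y] for x in X])
    rows, cols = linear_sum_assignment(K, maximize=True)
    return K[rows, cols].sum()
```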
Related work. Correspondence problems have been extensively studied in object recognition,
where objects are represented by sets of features often called bag of words. Grauman and Darrell
proposed the pyramid match kernel that seeks to approximate correspondences between points in
ℝ^d by employing a space-partitioning tree structure and counting how often points fall into the
same bin [11]. An adaptive partitioning with non-uniformly shaped bins was used to improve the
approximation quality in high dimensions [10].
For non-vectorial data, Fröhlich et al. [8] proposed kernels for graphs derived from an optimal assignment between their vertices and applied the approach to molecular graphs. However, it was shown
that the resulting similarity measure is not necessarily a valid kernel [22]. Therefore, Vishwanathan et
al. [23] proposed a theoretically well-founded variation of the kernel, which essentially replaces the
max-function in Eq. (1) by a soft-max function. Besides introducing an additional parameter, which
must be chosen carefully to avoid numerical difficulties, the approach requires the evaluation of a
sum over all possible assignments instead of finding a single optimal one. This leads to an increase in
running time from cubic to factorial, which is infeasible in practice. Pachauri et al. [16] considered
the problem of finding optimal assignments between multiple sets. The problem is equivalent to
finding a permutation of the elements of every set, such that assigning the i-th elements to each other
yields an optimal result. Solving this problem allows the derivation of valid kernels between pairs
of sets with a fixed ordering. This approach was referred to as transitive assignment kernel in [18]
and employed for graph classification. However, this does not only lead to non-optimal assignments
between individual pairs of graphs, but also suffers from high computational costs. Johansson and
Dubhashi [14] derived kernels from optimal assignments by first sampling a fixed set of so-called
landmarks. Each data point is then represented by a feature vector, where each component is the
optimal assignment similarity to a landmark.
Various general approaches to cope with indefinite kernels have been proposed, in particular, for
support vector machines [see 15, and references therein]. Such approaches should principally be used
in applications, where similarities cannot be expressed by positive semidefinite kernels.
Our contribution. We study optimal assignment kernels in more detail and investigate which base
kernels lead to valid optimal assignment kernels. We characterize a specific class of kernels we
refer to as strong and show that strong kernels are equivalent to kernels obtained from a hierarchical
partition of the domain of the kernel. We show that for strong base kernels the optimal assignment (i)
yields a valid kernel; and (ii) can be computed in linear time given the associated hierarchy. While the
computation reduces to histogram intersection similar to the pyramid match kernel [11], our approach
is in no way restricted to specific objects like points in ℝ^d. We demonstrate the versatility of our
results by deriving novel graph kernels based on optimal assignments, which are shown to improve
over their convolution-based counterparts. In particular, we propose the Weisfeiler-Lehman optimal
assignment kernel, which performs favourably compared to state-of-the-art graph kernels on a wide range of data sets.
2 Preliminaries
Before continuing with our contribution, we begin by introducing some key notation for kernels
and trees which will be used later. A (valid) kernel on a set 𝒳 is a function k : 𝒳 × 𝒳 → ℝ such that there is a real Hilbert space ℋ and a mapping φ : 𝒳 → ℋ such that k(x, y) = ⟨φ(x), φ(y)⟩ for all x, y in 𝒳, where ⟨·, ·⟩ denotes the inner product of ℋ. We call φ a feature map, and ℋ a feature space. Equivalently, a function k : 𝒳 × 𝒳 → ℝ is a kernel if and only if for every subset {x_1, . . . , x_n} ⊆ 𝒳 the n × n matrix defined by [m]_{i,j} = k(x_i, x_j) is p.s.d. The Dirac kernel k_δ is defined by k_δ(x, y) = 1, if x = y and 0 otherwise.
We consider simple undirected graphs G = (V, E), where V (G) = V is the set of vertices and
E(G) = E the set of edges. An edge {u, v} is for short denoted by uv or vu, where both refer to the
same edge. A graph with a unique path between any two vertices is a tree. A rooted tree is a tree
T with a distinguished vertex r ∈ V(T) called root. The vertex following v on the path to the root
r is called parent of v and denoted by p(v), where p(r) = r. The vertices on this path are called
ancestors of v and the depth of v is the number of edges on the path. The lowest common ancestor
LCA(u, v) of two vertices u and v in a rooted tree is the unique vertex with maximum depth that is
an ancestor of both u and v.
3 Strong kernels and hierarchies
In this section we introduce a restricted class of kernels that will later turn out to lead to valid optimal
assignment kernels when employed as base kernel. We provide two different characterizations of
this class, one in terms of an inequality constraint on the kernel values, and the other by means of a
hierarchy defined on the domain of the kernel. The latter will provide the basis for our algorithm to
compute valid optimal assignment kernels efficiently.
We first consider similarity functions fulfilling the requirement that for any two objects there is no
third object that is more similar to each of them than the two to each other. We will see later in
Section 3.1 that every such function indeed is p.s.d. and hence a valid kernel.
Definition 1 (Strong Kernel). A function k : 𝒳 × 𝒳 → ℝ≥0 is called strong kernel if k(x, y) ≥ min{k(x, z), k(z, y)} for all x, y, z ∈ 𝒳.
Note that a strong kernel requires that every object is most similar to itself, i.e., k(x, x) ≥ k(x, y) for all x, y ∈ 𝒳.
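Definition 1 is easy to test on a finite sample; a brute-force O(n³) sketch over a kernel matrix:

```python
import numpy as np

def is_strong(K, tol=1e-9):
    """Check k(x, y) >= min(k(x, z), k(z, y)) for all triples (Definition 1)."""
    K = np.asarray(K, dtype=float)
    n = len(K)
    return all(K[x, y] >= min(K[x, z], K[z, y]) - tol
               for x in range(n) for y in range(n) for z in range(n))

print(is_strong(np.eye(3)))                          # Dirac kernel -> True
print(is_strong([[4, 3, 1], [3, 5, 1], [1, 1, 2]]))  # hierarchy-induced -> True
```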
In the following we introduce a restricted class of kernels that is derived from a hierarchy on the set
X . As we will see later in Theorem 1 this class of kernels is equivalent to strong kernels according to
Definition 1. Such hierarchies can be systematically constructed on sets of arbitrary objects in order
to derive strong kernels. We commence by fixing the concept of a hierarchy formally. Let T be a
rooted tree such that the leaves of T are the elements of X . Each inner vertex v in T corresponds to a
subset of X comprising all leaves of the subtree rooted at v. Therefore the tree T defines a family of
nested subsets of 𝒳. Let w : V(T) → ℝ≥0 be a weight function such that w(v) ≥ w(p(v)) for all v
in T . We refer to the tuple (T, w) as a hierarchy.
Definition 2 (Hierarchy-induced Kernel). Let H = (T, w) be a hierarchy on X , then the function
defined as k(x, y) = w(LCA(x, y)) for all x, y in X is the kernel on X induced by H.
We show that Definitions 1 and 2 characterize the same class of kernels.
Lemma 1. Every kernel on X that is induced by a hierarchy on X is strong.
Proof. Assume there is a hierarchy (T, w) that induces a kernel k that is not strong. Then there are
x, y, z ∈ 𝒳 with k(x, y) < min{k(x, z), k(z, y)} and three vertices a = LCA(x, z), b = LCA(z, y)
and c = LCA(x, y) with w(c) < w(a) and w(c) < w(b). The unique path from x to the root contains
a and the path from y to the root contains b, both paths contain c. Since weights decrease along
paths, the assumption implies that a, b, c are pairwise distinct and c is an ancestor of a and b. Thus,
there must be a path from z via a to c and another path from z via b to c. Hence, T is not a tree,
contradicting the assumption.
We show constructively that the converse holds as well.
Lemma 2. For every strong kernel k on X there is a hierarchy on X that induces k.
Proof (Sketch). We incrementally construct a hierarchy on X that induces k by successive insertion
of elements from X . In each step the hierarchy induces k restricted to the inserted elements and
eventually induces k after insertion of all elements. Initially, we start with a hierarchy containing
just one element x ∈ 𝒳 with w(x) = k(x, x). The key to all following steps is that there is a unique way to extend the hierarchy: Let X_i ⊆ 𝒳 be the first i elements in the order of insertion and let H_i = (T_i, w_i) be the hierarchy after the i-th step. A leaf representing the next element z can be grafted onto H_i to form a hierarchy H_{i+1} that induces k restricted to X_{i+1} = X_i ∪ {z}. Let
[Figure 1: three panels showing (a) H_i, (b) H_{i+1} for B = {b_1, b_2, b_3}, and (c) H_{i+1} for |B| = 1.]
Figure 1: Illustrative example for the construction of the hierarchy on i + 1 objects (b), (c) from the hierarchy on i objects (a) following the procedure used in the proof of Lemma 2. The inserted leaf z is highlighted in red, its parent p with weight w(p) = k_max in green and b in blue, respectively.
B = {x ∈ X_i : k(x, z) = k_max}, where k_max = max_{y ∈ X_i} k(y, z). There is a unique vertex b such that B are the leaves of the subtree rooted at b, cf. Fig. 1. We obtain H_{i+1} by inserting a new vertex p with child z into T_i, such that p becomes the parent of b, cf. Fig. 1(b), (c). We set w_{i+1}(p) = k_max, w_{i+1}(z) = k(z, z) and w_{i+1}(x) = w_i(x) for all x ∈ V(T_i). Let k′ be the kernel induced by H_{i+1}. Clearly, k′(x, y) = k(x, y) for all x, y ∈ X_i. According to the construction, k′(z, x) = k_max = k(z, x) for all x ∈ B. For all x ∉ B we have LCA(z, x) = LCA(c, x) for any c ∈ B, see Fig. 1(b). For strong kernels k(x, c) ≥ min{k(x, z), k(z, c)} = k(x, z) and k(x, z) ≥ min{k(x, c), k(c, z)} = k(x, c), since k(c, z) = k_max. Thus k(z, x) = k(c, x) must hold and consequently k′(z, x) = k(z, x).
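The proof is constructive and straightforward to implement; a sketch that builds a hierarchy from a strong kernel matrix (object i becomes leaf i):

```python
class Node:
    def __init__(self, w, children=None, item=None):
        self.w = w                      # weight w(v) of this vertex
        self.children = children or []
        self.item = item                # leaf: index of the represented object

    def leaves(self):
        if self.item is not None:
            return {self.item}
        return set().union(*(c.leaves() for c in self.children))

def build_hierarchy(K):
    """Incrementally build a hierarchy inducing the strong kernel matrix K,
    following the constructive proof of Lemma 2 (assumes K is strong)."""
    root = Node(K[0][0], item=0)
    for z in range(1, len(K)):
        kmax = max(K[z][x] for x in range(z))
        B = {x for x in range(z) if K[z][x] == kmax}
        parent, b = None, root          # locate the unique vertex with leaf set B
        while b.leaves() != B:
            parent, b = b, next(c for c in b.children if B <= c.leaves())
        p = Node(kmax, children=[b, Node(K[z][z], item=z)])
        if parent is None:
            root = p
        else:
            parent.children[parent.children.index(b)] = p
    return root
```

Applied to the kernel matrix of Fig. 2 below, this recovers exactly the hierarchy shown there.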
Note that a hierarchy inducing a specific strong kernel is not unique: Adjacent inner vertices with the
same weight can be merged, and vertices with just one child can be removed without changing the
induced kernel. Combining Lemmas 1 and 2 we obtain the following result.
Theorem 1. A kernel k on X is strong if and only if it is induced by a hierarchy on X .
As a consequence of the above theorem the number of values a strong kernel on n objects may take is bounded by the number of vertices in a binary tree with n leaves, i.e., for every strong kernel k on 𝒳 we have |img(k)| ≤ 2|𝒳| − 1. The Dirac kernel is a common example of a strong kernel; in fact, every kernel k : 𝒳 × 𝒳 → ℝ≥0 with |img(k)| = 2 is strong.
The definition of a strong kernel and its relation to hierarchies is reminiscent of related concepts for distances: A metric d on 𝒳 is an ultrametric if d(x, y) ≤ max{d(x, z), d(z, y)} for all x, y, z ∈ 𝒳. For every ultrametric d on 𝒳 there is a rooted tree T with leaves 𝒳 and edge weights, such that (i) d is the path length between leaves in T, (ii) the path lengths from a leaf to the root are all equal. Indeed, every ultrametric can be embedded into a Hilbert space [13] and thus the associated inner product is a valid kernel. Moreover, it can be shown that this inner product always is a strong kernel. However, the concept of strong kernels is more general: there are strong kernels k such that the associated kernel metric d_k(x, y) = ‖φ(x) − φ(y)‖ is not an ultrametric. The distinction originates from the self-similarities, which, in strong kernels, can be arbitrary provided that they fulfil k(x, x) ≥ k(x, y) for all x, y in 𝒳. This degree of freedom is lost when considering distances. If we require all self-similarities of a strong kernel to be equal, then the associated kernel metric is always an ultrametric. Consequently, strong kernels correspond to a superset of ultrametrics. We explicitly define a feature space for general strong kernels in the following.
3.1 Feature maps of strong kernels
We use the property that every strong kernel is induced by a hierarchy to derive feature vectors for strong kernels. Let (T, w) be a hierarchy on 𝒳 that induces the strong kernel k. We define the additive weight function ω : V(T) → ℝ≥0 as ω(v) = w(v) − w(p(v)) and ω(r) = w(r) for the root r. Note that the property of a hierarchy assures that the difference is non-negative. For v ∈ V(T) let P(v) ⊆ V(T) denote the vertices in T on the path from v to the root r.
We consider the mapping φ : 𝒳 → ℝ^t, where t = |V(T)| and the components indexed by v ∈ V(T) are

$$[\phi(x)]_v = \begin{cases} \sqrt{\omega(v)} & \text{if } v \in P(x),\\ 0 & \text{otherwise.} \end{cases}$$
[Figure 2: (a) the kernel matrix on {a, b, c} with rows (4, 3, 1), (3, 5, 1), (1, 1, 2); (b) the inducing hierarchy with root r annotated 1; 1, inner vertex v annotated 3; 2, and leaves a (4; 1), b (5; 2), c (2; 1); (c) the feature vectors φ(a) = (1, √2, 1, 0, 0)^⊤, φ(b) = (1, √2, 0, √2, 0)^⊤, φ(c) = (1, 0, 0, 0, 1)^⊤.]
Figure 2: The matrix of a strong kernel on three objects (a) induced by the hierarchy (b) and the derived feature vectors (c). A vertex u in (b) is annotated by its weights w(u); ω(u).
Proposition 1. Let k be a strong kernel on 𝒳. The function φ defined as above is a feature map of k, i.e., k(x, y) = φ(x)^⊤ φ(y) for all x, y ∈ 𝒳.
Proof. Given arbitrary x, y ∈ 𝒳, let c = LCA(x, y). The dot product yields

$$\phi(x)^\top \phi(y) = \sum_{v \in V(T)} [\phi(x)]_v\, [\phi(y)]_v = \sum_{v \in P(c)} \big(\sqrt{\omega(v)}\big)^2 = w(c) = k(x, y),$$

since according to the definition the only non-zero products contributing to the sum over v ∈ V(T) are those in P(x) ∩ P(y) = P(c).
Figure 2 shows an example of a strong kernel, an associated hierarchy and the derived feature vectors.
As a consequence of Theorem 1 and Proposition 1, strong kernels according to Definition 1 are indeed
valid kernels.
4 Valid kernels from optimal assignments
We consider the function K_B^k on [𝒳]^n according to Eq. (1) under the assumption that the base kernel k is strong. Let (T, w) be a hierarchy on 𝒳 which induces k. For a vertex v ∈ V(T) and a set X ⊆ 𝒳, we denote by X_v the subset of X that is contained in the subtree rooted at v. We define the histogram H^k of a set X ∈ [𝒳]^n w.r.t. the strong base kernel k as H^k(X) = Σ_{x ∈ X} φ(x) ⊙ φ(x), where φ is the feature map of the strong base kernel according to Section 3.1 and ⊙ denotes the element-wise product. Equivalently, [H^k(X)]_v = ω(v) · |X_v| for v ∈ V(T). The histogram intersection kernel [20] is defined as K_∩(g, h) = Σ_{i=1}^t min{[g]_i, [h]_i}, t ∈ ℕ, and known to be a valid kernel on ℝ^t [2, 5].
Theorem 2. Let k be a strong kernel on 𝒳 and the histograms H^k defined as above; then K_B^k(X, Y) = K_∩(H^k(X), H^k(Y)) for all X, Y ∈ [𝒳]^n.
Proof. Let (T, w) be a hierarchy inducing the strong base kernel k. We rewrite the weight of an assignment B as a sum of weights of vertices in T. Since

$$k(x, y) = w(\mathrm{LCA}(x, y)) = \sum_{v \in P(x) \cap P(y)} \omega(v),$$

we have

$$W(B) = \sum_{(x,y) \in B} k(x, y) = \sum_{v \in V(T)} c_v \cdot \omega(v),$$

where c_v counts how often v appears simultaneously in P(x) and P(y) in total for all (x, y) ∈ B. For the histogram intersection kernel we obtain

$$K_\cap\big(H^k(X), H^k(Y)\big) = \sum_{v \in V(T)} \min\{\omega(v) \cdot |X_v|,\, \omega(v) \cdot |Y_v|\} = \sum_{v \in V(T)} \min\{|X_v|, |Y_v|\} \cdot \omega(v).$$

Since every assignment B ∈ B(X, Y) is a bijection, each x ∈ X and y ∈ Y appears only once in B and c_v ≤ min{|X_v|, |Y_v|} follows.
It remains to show that the above inequality is tight for an optimal assignment. We construct such an assignment by the following greedy approach: We perform a bottom-up traversal on the hierarchy starting with the leaves. For every vertex v in the hierarchy we arbitrarily pair the objects in X_v and Y_v that are not yet contained in the assignment. Note that no element in X_v has been assigned to an element in Y \ Y_v, and no element in Y_v to an element from X \ X_v. Hence, at every vertex v we have c_v = min{|X_v|, |Y_v|} vertices from X_v assigned to vertices in Y_v.
[Figure 3: (a) the bipartite assignment between the multisets X (three vertices labelled a, one b, one c) and Y (one a, two b, two c); (b) the histograms H(X) and H(Y) over the hierarchy vertices r, v, a, b, c, with counts on a 0 to 8 scale.]
Figure 3: An assignment instance (a) for X, Y ∈ [𝒳]^5 and the derived histograms (b). The set X contains three distinct vertices labelled a and the set Y two distinct vertices labelled b and c. Taking the multiplicities into account the histograms are obtained from the hierarchy of the base kernel k depicted in Fig. 2. The optimal assignment yields a value of K_B^k(X, Y) = 15, where grey, green, brown, red and orange edges have weight 1, 2, 3, 4 and 5, respectively. The histogram intersection kernel gives K_∩(H^k(X), H^k(Y)) = min{5, 5} + min{8, 6} + min{3, 1} + min{2, 4} + min{1, 2} = 15.
Figure 3 illustrates the relation between the optimal assignment kernel employing a strong base kernel and the histogram intersection kernel. Note that a vertex v ∈ V(T) with ω(v) = 0 does not contribute to the histogram intersection kernel and can be omitted. In particular, for any two objects x_1, x_2 ∈ 𝒳 with k(x_1, y) = k(x_2, y) for all y ∈ 𝒳 we have ω(x_1) = ω(x_2) = 0. There is no need to explicitly represent such leaves in the hierarchy, yet their multiplicity must be considered to determine the number of leaves in the subtree rooted at an inner vertex, cf. Fig. 2, 3.
Corollary 1. If the base kernel k is strong, then the function K_B^k is a valid kernel.
Theorem 2 implies not only that optimal assignments give rise to valid kernels for strong base kernels, but also allows computing them by histogram intersection. Provided that the hierarchy is known, bottom-up computation of histograms and their intersection can both be performed in linear time, while the general Hungarian method would require cubic time to solve the assignment problem [6].
Corollary 2. Given a hierarchy inducing k, K_B^k(X, Y) can be computed in time O(|X| + |Y|).
5 Graph kernels from optimal assignments
The concept of optimal assignment kernels is rather general and can be applied to derive kernels on various structures. In this section we apply our results to obtain novel graph kernels, i.e., kernels of the form K : 𝒢 × 𝒢 → ℝ, where 𝒢 denotes the set of graphs. We assume that every vertex v is equipped with a categorical label given by τ(v). Labels typically arise from applications, e.g., in a graph representing a chemical compound the labels may indicate atom types.
5.1 Optimal assignment kernels on vertices and edges
As a baseline we propose graph kernels on vertices and edges. The vertex optimal assignment kernel (V-OA) is defined as K(G, H) = K_B^k(V(G), V(H)), where k is the Dirac kernel on vertex labels. Analogously, the edge optimal assignment kernel (E-OA) is given by K(G, H) = K_B^k(E(G), E(H)), where we define k(uv, st) = 1 if at least one of the mappings (u ↦ s, v ↦ t) and (u ↦ t, v ↦ s) maps vertices with the same label only, and 0 otherwise. Since these base kernels are Dirac kernels, they are strong and, consequently, V-OA and E-OA are valid kernels.
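With a Dirac base kernel the optimal assignment simply matches equal labels, so V-OA reduces to counting; a minimal sketch:

```python
from collections import Counter

def vertex_oa_kernel(labels_G, labels_H):
    """V-OA: optimal assignment under the Dirac kernel on vertex labels,
    i.e., the histogram intersection of label counts."""
    cG, cH = Counter(labels_G), Counter(labels_H)
    return sum(min(cG[l], cH[l]) for l in cG.keys() & cH.keys())
```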
5.2 Weisfeiler-Lehman optimal assignment kernels
Weisfeiler-Lehman kernels are based on iterative vertex colour refinement and have been shown to provide state-of-the-art prediction performance in experimental evaluations [19]. These kernels employ the classical 1-dimensional Weisfeiler-Lehman heuristic for graph isomorphism testing and consider subtree patterns encoding the neighbourhood of each vertex up to a given distance. For a parameter h and a graph G with initial labels τ, a sequence (τ_0, . . . , τ_h) of refined labels referred to as colours is computed, where τ_0 = τ and τ_i is obtained from τ_{i−1} by the following procedure:
[Figure 4: (a) an example graph G on vertices a to f with the refined colours drawn on the vertices; (b) the WL feature vector of G mapping each colour to its count: 6 for the initial colour, then 5 and 1 (iteration 1), 4, 1 and 1 (iteration 2), and 2, 1, 2 and 1 (iteration 3); (c) the associated hierarchy grouping the leaves {a, b}, {c, d}, {e}, {f}.]
Figure 4: A graph G with uniform initial colours τ_0 and refined colours τ_i for i ∈ {1, . . . , 3} (a), the feature vector of G for the Weisfeiler-Lehman subtree kernel (b) and the associated hierarchy (c). Note that the vertices of G are the leaves of the hierarchy, although not shown explicitly in Fig. 4(c).
Sort the multiset of colours {τ_{i−1}(u) : vu ∈ E(G)} for every vertex v lexicographically to obtain a unique sequence of colours and add τ_{i−1}(v) as first element. Assign a new colour τ_i(v) to every vertex v by employing a one-to-one mapping from sequences to new colours. Figure 4(a) illustrates the refinement process. The Weisfeiler-Lehman subtree kernel (WL) counts the vertex colours two graphs have in common in the first h refinement steps and can be computed by taking the dot product of feature vectors, where each component counts the occurrences of a colour, see Fig. 4(b).
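A sketch of the refinement procedure just described; colours are represented by integers and a fresh colour table is used per iteration (only equality within an iteration matters here):

```python
def wl_refine(adj, labels, h):
    """1-dimensional Weisfeiler-Lehman colour refinement.
    adj: dict vertex -> iterable of neighbours; labels: dict vertex -> colour.
    Returns the colourings (tau_0, ..., tau_h)."""
    colourings = [dict(labels)]
    for _ in range(h):
        prev, table, new = colourings[-1], {}, {}
        for v in adj:
            # own colour first, then the lexicographically sorted neighbour colours
            signature = (prev[v], tuple(sorted(prev[u] for u in adj[v])))
            new[v] = table.setdefault(signature, len(table))
        colourings.append(new)
    return colourings
```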
We propose the Weisfeiler-Lehman optimal assignment kernel (WL-OA), which is defined on the vertices like V-OA, but employs the non-trivial base kernel

$$k(u, v) = \sum_{i=0}^{h} k_\delta(\tau_i(u), \tau_i(v)). \qquad (2)$$
This base kernel corresponds to the number of matching colours in the refinement sequence. More intuitively, the base kernel value reflects to what extent the two vertices have a similar neighbourhood. Let V be the set of all vertices of graphs in 𝒢; we show that the refinement process defines a hierarchy on V, which induces the base kernel of Eq. (2). Each vertex colouring τ_i naturally partitions V into colour classes, i.e., sets of vertices with the same colour. Since the refinement takes the colour τ_i(v) of a vertex v into account when computing τ_{i+1}(v), the implication τ_i(u) ≠ τ_i(v) ⇒ τ_{i+1}(u) ≠ τ_{i+1}(v) holds for all u, v ∈ V. Hence, the colour classes induced by τ_{i+1} are at least as fine as those induced by τ_i. Moreover, the sequence (τ_i)_{0≤i≤h} gives rise to a family of nested subsets, which can naturally be represented by a hierarchy (T, w), see Fig. 4(c) for an illustration. When assuming ω(v) = 1 for all vertices v ∈ V(T), the hierarchy induces the kernel of Eq. (2). We have shown that the base kernel is strong and it follows from Corollary 1 that WL-OA is a valid kernel. Moreover, it can be computed from the feature vectors of the Weisfeiler-Lehman subtree kernel in linear time by histogram intersection, cf. Theorem 2.
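Since every refinement colour is a hierarchy vertex with ω = 1, WL-OA reduces to summed histogram intersections of colour counts; a sketch on top of wl_refine, refining both graphs jointly so that identical neighbourhoods receive identical colours:

```python
from collections import Counter

def wl_oa_kernel(adj_G, labels_G, adj_H, labels_H, h):
    """WL-OA via Theorem 2: histogram intersection over the colour counts
    of the h + 1 joint colourings."""
    adj = {('G', v): [('G', u) for u in nb] for v, nb in adj_G.items()}
    adj.update({('H', v): [('H', u) for u in nb] for v, nb in adj_H.items()})
    labels = {('G', v): c for v, c in labels_G.items()}
    labels.update({('H', v): c for v, c in labels_H.items()})
    value = 0
    for tau in wl_refine(adj, labels, h):       # reuses the sketch above
        cG = Counter(c for (g, _), c in tau.items() if g == 'G')
        cH = Counter(c for (g, _), c in tau.items() if g == 'H')
        value += sum(min(cG[c], cH[c]) for c in cG.keys() & cH.keys())
    return value
```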
6 Experimental evaluation
We report on the experimental evaluation of the proposed graph kernels derived from optimal
assignments and compare with state-of-the-art convolution kernels.
6.1 Method and Experimental Setup
We performed classification experiments using the C-SVM implementation LIBSVM [7]. We report
mean prediction accuracies and standard deviations obtained by 10-fold cross-validation repeated 10
times with random fold assignment. Within each fold all necessary parameters were selected by cross-validation based on the training set. This includes the regularization parameter C, kernel parameters
where applicable and whether to normalize the kernel matrix. All kernels were implemented in Java
and experiments were conducted using Oracle Java v1.8.0 on an Intel Core i7-3770 CPU at 3.4GHz
(Turbo Boost disabled) with 16GB of RAM using a single processor only.
Kernels. As a baseline we implemented the vertex kernel (V) and edge kernel (E), which are the
dot products on vertex and edge label histograms, respectively, where an edge label consists of the
labels of its endpoints. V-OA and E-OA are the related optimal assignment kernels as described in
Sec. 5.1. For the Weisfeiler-Lehman kernels WL and WL-OA, see Section 5.2, the parameter h was
chosen from {0, . . . , 7}. In addition we implemented a graphlet kernel (GL) and the shortest-path kernel (SP) [3]. GL is based on connected subgraphs with three vertices taking labels into account similar to the approach used in [19]. For SP we used the Dirac kernel to compare path lengths and computed the kernel by explicit feature maps, cf. [19]. Note that all kernels not identified as optimal assignment kernels by the suffix OA are convolution kernels.

Table 1: Classification accuracies and standard deviations on graph data sets representing small molecules, macromolecules and social networks.

Kernel   MUTAG      PTC-MR     NCI1       NCI109     PROTEINS   D&D        ENZYMES    COLLAB     REDDIT
V        85.4±0.7   57.8±0.9   64.6±0.1   63.6±0.2   71.9±0.4   78.2±0.4   23.4±1.1   56.2±0.0   75.3±0.1
V-OA     82.5±1.1   56.4±1.8   65.6±0.3   65.1±0.4   73.8±0.5   78.8±0.3   35.1±1.1   59.3±0.1   77.8±0.1
E        85.2±0.6   57.3±0.7   66.2±0.1   64.9±0.1   73.5±0.2   78.3±0.5   27.4±0.8   52.0±0.0   75.1±0.1
E-OA     81.0±1.1   56.3±1.7   68.9±0.3   68.7±0.2   74.5±0.6   79.0±0.4   37.4±1.8   68.2±0.3   79.8±0.2
WL       86.0±1.7   61.3±1.4   85.8±0.2   85.9±0.3   75.6±0.4   79.0±0.4   53.7±1.4   79.1±0.1   80.8±0.4
WL-OA    84.5±1.7   63.6±1.5   86.1±0.2   86.3±0.2   76.4±0.4   79.2±0.4   59.9±1.1   80.7±0.1   89.3±0.3
GL       85.2±0.9   54.7±2.0   70.5±0.2   69.3±0.2   72.7±0.6   79.7±0.7   30.6±1.2   64.7±0.1   60.1±0.2
SP       83.0±1.4   58.9±2.2   74.5±0.3   73.0±0.3   75.8±0.5   79.0±0.6   42.6±1.6   58.8±0.2   84.6±0.2
Data sets. We tested on widely-used graph classification benchmarks from different domains [cf. 4, 23, 19, 24]: MUTAG, PTC-MR, NCI1 and NCI109 are graphs derived from small molecules, PROTEINS, D&D and ENZYMES represent macromolecules, and COLLAB and REDDIT are derived from social networks.¹ All data sets have two class labels except ENZYMES and COLLAB, which are divided into six and three classes, respectively. The social network graphs are unlabelled and we considered all vertices uniformly labelled. All other graph data sets come with vertex labels. Edge labels, if present, were ignored since they are not supported by all graph kernels under comparison.
6.2 Results and discussion
Table 1 summarizes the classification accuracies. We observe that optimal assignment kernels on most data sets improve over the prediction accuracy obtained by their convolution-based counterpart. The only distinct exception is MUTAG. The extent of improvement on the other data sets varies, but is in particular remarkable for ENZYMES and REDDIT. This indicates that optimal assignment kernels provide a more valid notion of similarity than convolution kernels for these classification tasks. The most successful kernel is WL-OA, which almost consistently improves over WL and performs best on seven of the nine data sets. WL-OA provides the second best accuracy on D&D and ranks in the middle of the field for MUTAG. For these two data sets the difference in accuracy between the kernels is small and even the baseline kernels perform notably well.
The time to compute the quadratic kernel matrix was less than one minute for all kernels and data sets with the exception of SP on D&D (29 min) and REDDIT (2 h) as well as GL on COLLAB (28 min). The running time to compute the optimal assignment kernels by histogram intersection was consistently on par with the running time required for the related convolution kernels and orders of magnitude faster than their computation by the Hungarian method.
7 Conclusions and future work
We have characterized the class of strong kernels leading to valid optimal assignment kernels and
derived novel effective kernels for graphs. The reduction to histogram intersection makes efficient
computation possible and known speed-up techniques for intersection kernels can directly be applied
(see, e.g., [21] and references therein). We believe that our results may form the basis for the design
of new kernels, which can be computed efficiently and adequately measure similarity.
¹ The data sets, further references and statistics are available from http://graphkernels.cs.tu-dortmund.de.
Acknowledgments
N. M. Kriege is supported by the German Science Foundation (DFG) within the Collaborative Research Center SFB 876 "Providing Information by Resource-Constrained Data Analysis", project A6 "Resource-efficient Graph Mining". P.-L. Giscard is grateful for the financial support provided by the Royal Commission for the Exhibition of 1851.
References
[1] L. Bai, L. Rossi, Z. Zhang, and E. R. Hancock. An aligned subtree kernel for weighted graphs. In Proc. Int. Conf. Mach. Learn., ICML 2015, pages 30–39, 2015.
[2] A. Barla, F. Odone, and A. Verri. Histogram intersection kernel for image classification. In Int. Conf. Image Proc., ICIP 2003, volume 3, pages III–513–16 vol. 2, Sept 2003.
[3] K. M. Borgwardt and H.-P. Kriegel. Shortest-path kernels on graphs. In Proc. IEEE Int. Conf. Data Min., ICDM '05, pages 74–81, Washington, DC, USA, 2005.
[4] K. M. Borgwardt, C. S. Ong, S. Schönauer, S. V. N. Vishwanathan, A. J. Smola, and H.-P. Kriegel. Protein function prediction via graph kernels. Bioinformatics, 21 Suppl 1:i47–i56, Jun 2005.
[5] S. Boughorbel, J. P. Tarel, and N. Boujemaa. Generalized histogram intersection kernel for image recognition. In Int. Conf. Image Proc., ICIP 2005, pages III–161–4, 2005.
[6] R. E. Burkard, M. Dell'Amico, and S. Martello. Assignment Problems. SIAM, 2012.
[7] C.-C. Chang and C.-J. Lin. LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst. Technol., 2:27:1–27:27, May 2011.
[8] H. Fröhlich, J. K. Wegner, F. Sieker, and A. Zell. Optimal assignment kernels for attributed molecular graphs. In Proc. Int. Conf. Mach. Learn., ICML '05, pages 225–232, 2005.
[9] M. Gori, M. Maggini, and L. Sarti. Exact and approximate graph matching using random walks. IEEE Trans. Pattern Anal. Mach. Intell., 27(7):1100–1111, July 2005.
[10] K. Grauman and T. Darrell. Approximate correspondences in high dimensions. In B. Schölkopf, J. C. Platt, and T. Hoffman, editors, Adv. Neural Inf. Process. Syst. 19, NIPS '06, pages 505–512. MIT Press, 2007.
[11] K. Grauman and T. Darrell. The pyramid match kernel: Efficient learning with sets of features. J. Mach. Learn. Res., 8:725–760, May 2007.
[12] D. Haussler. Convolution kernels on discrete structures. Technical Report UCSC-CRL-99-10, University of California, Santa Cruz, CA, USA, 1999.
[13] R. S. Ismagilov. Ultrametric spaces and related Hilbert spaces. Mathematical Notes, 62(2):186–197, 1997.
[14] F. D. Johansson and D. Dubhashi. Learning with similarity functions on graphs using matchings of geometric embeddings. In Proc. ACM SIGKDD Int. Conf. Knowledge Discovery and Data Mining, KDD '15, pages 467–476. ACM, 2015.
[15] G. Loosli, S. Canu, and C. S. Ong. Learning SVM in Krein spaces. IEEE Trans. Pattern Anal. Mach. Intell., PP(99):1–1, 2015.
[16] D. Pachauri, R. Kondor, and V. Singh. Solving the multi-way matching problem by permutation synchronization. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Adv. Neural Inf. Process. Syst. 26, NIPS '13, pages 1860–1868. Curran Associates, Inc., 2013.
[17] K. Riesen and H. Bunke. Approximate graph edit distance computation by means of bipartite graph matching. Image Vis. Comp., 27(7):950–959, 2009.
[18] M. Schiavinato, A. Gasparetto, and A. Torsello. Transitive assignment kernels for structural classification. In A. Feragen, M. Pelillo, and M. Loog, editors, Int. Workshop Similarity-Based Pattern Recognit., SIMBAD '15, pages 146–159, 2015.
[19] N. Shervashidze, P. Schweitzer, E. J. van Leeuwen, K. Mehlhorn, and K. M. Borgwardt. Weisfeiler-Lehman graph kernels. J. Mach. Learn. Res., 12:2539–2561, 2011.
[20] M. J. Swain and D. H. Ballard. Color indexing. Int. J. Comp. Vis., 7(1):11–32, 1991.
[21] A. Vedaldi and A. Zisserman. Efficient additive kernels via explicit feature maps. IEEE Trans. Pattern Anal. Mach. Intell., 34(3):480–492, 2012.
[22] J.-P. Vert. The optimal assignment kernel is not positive definite. CoRR, abs/0801.4061, 2008.
[23] S. V. N. Vishwanathan, N. N. Schraudolph, R. I. Kondor, and K. M. Borgwardt. Graph kernels. J. Mach. Learn. Res., 11:1201–1242, 2010.
[24] P. Yanardag and S. V. N. Vishwanathan. Deep graph kernels. In Proc. ACM SIGKDD Int. Conf. Knowledge Discovery and Data Mining, KDD '15, pages 1365–1374. ACM, 2015.
Estimating the class prior and posterior from noisy
positives and unlabeled data
Shantanu Jain, Martha White, Predrag Radivojac
Department of Computer Science
Indiana University, Bloomington, Indiana, USA
{shajain, martha, predrag}@indiana.edu
Abstract
We develop a classification algorithm for estimating posterior distributions from
positive-unlabeled data, that is robust to noise in the positive labels and effective
for high-dimensional data. In recent years, several algorithms have been proposed
to learn from positive-unlabeled data; however, many of these contributions remain theoretical, performing poorly on real high-dimensional data that is typically
contaminated with noise. We build on this previous work to develop two practical
classification algorithms that explicitly model the noise in the positive labels and
utilize univariate transforms built on discriminative classifiers. We prove that these
univariate transforms preserve the class prior, enabling estimation in the univariate space and avoiding kernel density estimation for high-dimensional data. The
theoretical development and parametric and nonparametric algorithms proposed
here constitute an important step towards wide-spread use of robust classification
algorithms for positive-unlabeled data.
1 Introduction
Access to positive, negative and unlabeled examples is a standard assumption for most semi-supervised binary classification techniques. In many domains, however, a sample from one of the
classes (say, negatives) may not be available, leading to the setting of learning from positive and
unlabeled data (Denis et al., 2005). Positive-unlabeled learning often emerges in sciences and commerce where an observation of a positive example (say, that a protein catalyzes reactions or that a
social network user likes a particular product) is usually reliable. Here, however, the absence of a
positive observation cannot be interpreted as a negative example. In molecular biology, for example,
an attempt to label a data point as positive (say, that a protein is an enzyme) may be unsuccessful for
a variety of experimental and biological reasons, whereas in social networks an explicit dislike of a
product may not be possible. Both scenarios lead to a situation where negative examples cannot be
actively collected.
Fortunately, the absence of negatively labeled examples can be tackled by incorporating unlabeled
examples as negatives, leading to the development of non-traditional classifiers. A traditional classifier predicts whether an example is positive or negative, whereas a nontraditional classifier predicts
whether the example is positive or unlabeled (Elkan and Noto, 2008). Positive vs. unlabeled training
is reasonable because the class posterior (and also the optimum scoring function for proper losses; Reid and Williamson, 2010) in the traditional setting is monotonically related to the posterior
in the non-traditional setting. However, the true posterior can be fully recovered from the nontraditional posterior only if we know the class prior: the proportion of positives in unlabeled data.
Moreover, the knowledge of the class prior is necessary for estimation of the performance criteria
such as the error rate, balanced error rate or F-measure, and also for finding classifiers that optimize
criteria obtained by thresholding a non-traditional scoring function (Menon et al., 2015).
Class prior estimation in a nonparametric setting has been actively researched in the past decade
offering an extensive theory of identifiability (Ward et al., 2009; Blanchard et al., 2010; Scott et al.,
2013; Jain et al., 2016) and a few practical solutions (Elkan and Noto, 2008; Ward et al., 2009;
du Plessis and Sugiyama, 2014; Sanderson and Scott, 2014; Jain et al., 2016; Ramaswamy et al.,
2016). Application of these algorithms to real data, however, is limited in that none of the proposed algorithms simultaneously deals with noise in the labels and practical estimation for high-dimensional data.
Much of the theory on learning class priors relies on the assumption that either the distribution of
positives is known or that the positive sample is clean. In practice, however, labeled data sets contain class-label noise, where an unspecified amount of negative examples contaminates the positive
sample. This is a realistic scenario in experimental sciences where technological advances enabled
generation of high-throughput data at a cost of occasional errors. One example for this comes from
the studies of proteins using analytical chemistry technology; i.e., mass spectrometry. For example,
in the process of peptide identification (Steen and Mann, 2004), bioinformatics methods are usually
set to report results with specified false discovery rate thresholds (e.g., 1%). Unfortunately, statistical assumptions in these experiments are sometimes violated, thereby leading to substantial noise in reported results, as in the case of identifying protein post-translational modifications. Similar amounts of noise might appear in social networks such as Facebook, where some users select "like", even when they do not actually like a particular post. Further, the only approach that does consider
similar such noise (Scott et al., 2013) requires density estimation, which is known to be problematic
for high-dimensional data.
In this work, we propose the first classification algorithm, with class prior estimation, designed
particularly for high-dimensional data with noise in the labeling of positive data. We first formalize
the problem of class prior estimation from noisy positive and unlabeled data. We extend the existing
identifiability theory for class prior estimation from positive-unlabeled data to this noise setting.
We then show that we can practically estimate class priors and the posterior distributions by first
transforming the input space to a univariate space, where density estimation is reliable. We prove
that these transformations preserve class priors and show that they correspond to training a nontraditional classifier. We derive a parametric algorithm and a nonparametric algorithm to learn the
class priors. Finally, we carry out experiments on synthetic and real-life data and provide evidence
that the new approaches are sound and effective.
2 Problem formulation
Consider a binary classification problem of mapping an input space $\mathcal{X}$ to an output space $\mathcal{Y} = \{0, 1\}$. Let $f$ be the true distribution of inputs. It can be represented as the following mixture
$$f(x) = \alpha f_1(x) + (1 - \alpha) f_0(x), \qquad (1)$$
where $x \in \mathcal{X}$, $y \in \mathcal{Y}$, $f_y$ are distributions over $\mathcal{X}$ for the positive ($y = 1$) and negative ($y = 0$) class, respectively; and $\alpha \in [0, 1)$ is the class prior or the proportion of the positive examples in $f$. We will refer to a sample from $f$ as unlabeled data.
Let $g$ be the distribution of inputs for the labeled data. Because the labeled sample contains some mislabeled examples, the corresponding distribution is also a mixture of $f_1$ and a small proportion, say $1 - \beta$, of $f_0$. That is,
$$g(x) = \beta f_1(x) + (1 - \beta) f_0(x), \qquad (2)$$
where $\beta \in (0, 1]$. Observe that both mixtures have the same components but different mixing proportions. The simplest scenario is that the mixing components $f_0$ and $f_1$ correspond to the class-conditional distributions $p(x|Y = 0)$ and $p(x|Y = 1)$, respectively. However, our approach also permits transformations of the input space $\mathcal{X}$, thus resulting in a more general setup.
The objective of this work is to study the estimation of the class prior $\alpha = p(Y = 1)$ and propose practical algorithms for estimating $\alpha$. The efficacy of this estimation is clearly tied to $\beta$: as $\beta$ gets smaller, the noise in the positive labels becomes larger. We will discuss identifiability of $\alpha$ and $\beta$ and give a practical algorithm for estimating $\alpha$ (and $\beta$). We will then use these results to estimate the posterior distribution of the class variable, $p(y|x)$, despite the fact that the labeled set does not contain any negative examples.
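To make the setup concrete, a minimal simulation of the two samples for univariate Gaussian components follows; the component distributions, sample sizes and proportions are illustrative assumptions, not values from our experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_mixture(n, prop, rng):
    # Draw n points from prop*f1 + (1 - prop)*f0, with the
    # illustrative components f1 = N(2, 1) and f0 = N(0, 1).
    from_f1 = rng.random(n) < prop
    return np.where(from_f1, rng.normal(2.0, 1.0, n), rng.normal(0.0, 1.0, n))

alpha, beta = 0.3, 0.9                           # class prior and labeled-sample purity
x_unlabeled = sample_mixture(10000, alpha, rng)  # sample from f, Equation 1
x_labeled = sample_mixture(1000, beta, rng)      # sample from g, Equation 2
```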
3 Identifiability
The class prior is identifiable if there is a unique class prior for a given pair (f, g). Much of the
identifiability characterization in this section has already been considered as the case of asymmetric
noise (Scott et al., 2013); see Section 7 on related work. We recreate these results here, with the aim
to introduce required notation, to highlight several important results for later algorithm development
and to include a few missing results needed for our approach. Though the proof techniques are
themselves quite different and could be of interest, we include them in the appendix due to space.
There are typically two aspects to address with identifiability. First, one needs to determine if a
problem is identifiable, and, second, if it is not, propose a canonical form that is identifiable. In
this section we will see that class prior is not identifiable in general because f0 can be a mixture
containing f1 and vice versa. To ensure identifiability, it is necessary to choose a canonical form
that prefers a class prior that makes the two components as different as possible; this canonical form
was introduced as the mutual irreducibility condition (Scott et al., 2013) and is related to the proper
novelty distribution (Blanchard et al., 2010) and the max-canonical form (Jain et al., 2016).
We discuss identifiability in terms of measures. Let $\mu$, $\nu$, $\mu_0$ and $\mu_1$ be probability measures defined on some $\sigma$-algebra $\mathcal{A}$ on $\mathcal{X}$, corresponding to $f$, $g$, $f_0$ and $f_1$, respectively. It follows that
$$\mu = \alpha\mu_1 + (1 - \alpha)\mu_0, \qquad (3)$$
$$\nu = \beta\mu_1 + (1 - \beta)\mu_0. \qquad (4)$$
Consider a family of pairs of mixtures having the same components
$$\mathcal{F}(\Pi) = \{(\mu, \nu) : \mu = \alpha\mu_1 + (1 - \alpha)\mu_0,\ \nu = \beta\mu_1 + (1 - \beta)\mu_0,\ (\mu_0, \mu_1) \in \Pi,\ 0 \le \alpha < \beta \le 1\},$$
where $\Pi$ is some set of pairs of probability measures defined on $\mathcal{A}$. The family is parametrized by the quadruple $(\alpha, \beta, \mu_0, \mu_1)$. The condition $\beta > \alpha$ means that $\nu$ has a greater proportion of $\mu_1$ compared to $\mu$. This is consistent with our assumption that the labeled sample mainly contains positives. The most general choice for $\Pi$ is
$$\Pi^{\mathrm{all}} = \mathcal{P}^{\mathrm{all}} \times \mathcal{P}^{\mathrm{all}} \setminus \{(\lambda, \lambda) : \lambda \in \mathcal{P}^{\mathrm{all}}\},$$
where $\mathcal{P}^{\mathrm{all}}$ is the set of all probability measures defined on $\mathcal{A}$ and $\{(\lambda, \lambda) : \lambda \in \mathcal{P}^{\mathrm{all}}\}$ is the set of pairs with equal distributions. Removing equal pairs prevents $\mu_0$ and $\mu_1$ from being identical.
We now define the maximum proportion of a component $\lambda_1$ in a mixture $\lambda$, which is used in the results below and to specify the criterion that enables identifiability; more specifically,
$$a_\lambda^{\lambda_1} = \max\{a \in [0, 1] : \lambda = a\lambda_1 + (1 - a)\lambda_0,\ \lambda_0 \in \mathcal{P}^{\mathrm{all}}\}. \qquad (5)$$
Of particular interest is the case when $a_\lambda^{\lambda_1} = 0$, which should be read as "$\lambda$ is not a mixture containing $\lambda_1$". We finally define the set of all possible $(\alpha, \beta)$ that generate $\mu$ and $\nu$ when $(\mu_0, \mu_1)$ varies in $\Pi$:
$$A^+(\mu, \nu, \Pi) = \{(\alpha, \beta) : \mu = \alpha\mu_1 + (1 - \alpha)\mu_0,\ \nu = \beta\mu_1 + (1 - \beta)\mu_0,\ (\mu_0, \mu_1) \in \Pi,\ 0 \le \alpha < \beta \le 1\}.$$
If $A^+(\mu, \nu, \Pi)$ is a singleton set for all $(\mu, \nu) \in \mathcal{F}(\Pi)$, then $\mathcal{F}(\Pi)$ is identifiable in $(\alpha, \beta)$.
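For distributions supported on a finite grid, the maximum in Equation 5 has a closed form: the largest $a$ with $\lambda - a\lambda_1 \ge 0$ everywhere. A small sketch of this discretized view (our illustration; the AlphaMax algorithm of Section 5 estimates this quantity from samples):

```python
import numpy as np

def max_proportion(lam, lam1, eps=1e-12):
    # a_lam^{lam1} in Equation 5 for probability vectors on a common grid:
    # the largest a such that lam - a*lam1 is still a nonnegative measure.
    mask = lam1 > eps
    return float(min(1.0, (lam[mask] / lam1[mask]).min()))

lam1 = np.array([0.1, 0.2, 0.3, 0.4])
lam0 = np.array([0.4, 0.3, 0.2, 0.1])
lam = 0.4 * lam1 + 0.6 * lam0
print(max_proportion(lam, lam1))  # > 0.4 here, since lam0 itself contains lam1
```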
First, we show that the most general choice for $\Pi$, $\Pi^{\mathrm{all}}$, leads to unidentifiability (Lemma 1). Fortunately, however, by choosing a restricted set
$$\Pi^{\mathrm{res}} = \{(\mu_0, \mu_1) \in \Pi^{\mathrm{all}} : a_{\mu_0}^{\mu_1} = 0,\ a_{\mu_1}^{\mu_0} = 0\}$$
as $\Pi$, we do obtain identifiability (Theorem 1). In words, $\Pi^{\mathrm{res}}$ contains pairs of distributions, where each distribution in a pair cannot be expressed as a mixture containing the other. The proofs of the results below are in the Appendix.
Lemma 1 (Unidentifiability) Given a pair of mixtures $(\mu, \nu) \in \mathcal{F}(\Pi^{\mathrm{all}})$, let parameters $(\alpha, \beta, \mu_0, \mu_1)$ generate $(\mu, \nu)$ and $\alpha^+ = a_\mu^\nu$, $\beta^+ = a_\nu^\mu$. It follows that
1. There is a one-to-one relation between $(\mu_0, \mu_1)$ and $(\alpha, \beta)$ and
$$\mu_0 = \frac{\beta\mu - \alpha\nu}{\beta - \alpha}, \qquad \mu_1 = \frac{(1 - \alpha)\nu - (1 - \beta)\mu}{\beta - \alpha}. \qquad (6)$$
2. Both expressions on the right-hand side of Equation 6 are well defined probability measures if and only if $\alpha/\beta \le \alpha^+$ and $(1 - \beta)/(1 - \alpha) \le \beta^+$.
3. $A^+(\mu, \nu, \Pi^{\mathrm{all}}) = \{(\alpha, \beta) : \alpha/\beta \le \alpha^+,\ (1 - \beta)/(1 - \alpha) \le \beta^+\}$.
4. $\mathcal{F}(\Pi^{\mathrm{all}})$ is unidentifiable in $(\alpha, \beta)$; i.e., $(\alpha, \beta)$ is not uniquely determined from $(\mu, \nu)$.
5. $\mathcal{F}(\Pi^{\mathrm{all}})$ is unidentifiable in $\alpha$ and $\beta$, individually; i.e., neither $\alpha$ nor $\beta$ is uniquely determined from $(\mu, \nu)$.
Observe that the definition of $a_\lambda^{\lambda_1}$ and $\mu \neq \nu$ imply $\alpha^+ < 1$ and, consequently, any $(\alpha, \beta) \in A^+(\mu, \nu, \Pi^{\mathrm{all}})$ satisfies $\alpha < \beta$, as expected.
Theorem 1 (Identifiability) Given $(\mu, \nu) \in \mathcal{F}(\Pi^{\mathrm{all}})$, let $\alpha^+ = a_\mu^\nu$ and $\beta^+ = a_\nu^\mu$. Let $\mu_0^* = (\mu - \alpha^+\nu)/(1 - \alpha^+)$, $\mu_1^* = (\nu - \beta^+\mu)/(1 - \beta^+)$ and
$$\alpha^* = \alpha^+(1 - \beta^+)/(1 - \alpha^+\beta^+), \qquad \beta^* = (1 - \beta^+)/(1 - \alpha^+\beta^+). \qquad (7)$$
It follows that
1. $(\alpha^*, \beta^*, \mu_0^*, \mu_1^*)$ generate $(\mu, \nu)$.
2. $(\mu_0^*, \mu_1^*) \in \Pi^{\mathrm{res}}$ and, consequently, $\alpha^* = a_\mu^{\mu_1^*}$, $\beta^* = a_\nu^{\mu_1^*}$.
3. $\mathcal{F}(\Pi^{\mathrm{res}})$ contains all pairs of mixtures in $\mathcal{F}(\Pi^{\mathrm{all}})$.
4. $A^+(\mu, \nu, \Pi^{\mathrm{res}}) = \{(\alpha^*, \beta^*)\}$.
5. $\mathcal{F}(\Pi^{\mathrm{res}})$ is identifiable in $(\alpha, \beta)$; i.e., $(\alpha, \beta)$ is uniquely determined from $(\mu, \nu)$.
We refer to the expressions of $\mu$ and $\nu$ as mixtures of components $\mu_0$ and $\mu_1$ as a max-canonical form when $(\mu_0, \mu_1)$ is picked from $\Pi^{\mathrm{res}}$. This form enforces that $\mu_1$ is not a mixture containing $\mu_0$ and vice versa, which leads to $\mu_0$ and $\mu_1$ having maximum separation, while still generating $\mu$ and $\nu$. Each pair of distributions in $\mathcal{F}(\Pi^{\mathrm{res}})$ is represented in this form. Identifiability of $\mathcal{F}(\Pi^{\mathrm{res}})$ in $(\alpha, \beta)$ occurs precisely when $A^+(\mu, \nu, \Pi^{\mathrm{res}}) = \{(\alpha^*, \beta^*)\}$, i.e., $(\alpha^*, \beta^*)$ is the only pair of mixing proportions that can appear in a max-canonical form of $\mu$ and $\nu$. Moreover, Statement 1 in Theorem 1 and Statement 1 in Lemma 1 imply that the max-canonical form is unique and completely specified by $(\alpha^*, \beta^*, \mu_0^*, \mu_1^*)$, with $\alpha^* < \beta^*$ following from Equation 7. Thus, using $\mathcal{F}(\Pi^{\mathrm{res}})$ to model the unlabeled and labeled data distributions makes estimation of not only $\alpha$, the class prior, but also $\beta, \mu_0, \mu_1$ a well-posed problem. Moreover, due to Statement 3 in Theorem 1, there is no loss in the modeling capability by using $\mathcal{F}(\Pi^{\mathrm{res}})$ instead of $\mathcal{F}(\Pi^{\mathrm{all}})$. Overall, identifiability, absence of loss of modeling capability and maximum separation between $\mu_0$ and $\mu_1$ combine to justify estimating $\alpha^*$ as the class prior.
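Equation 7 maps the pair $(\alpha^+, \beta^+)$, which can be estimated from data, to the identifiable proportions $(\alpha^*, \beta^*)$; a minimal sketch:

```python
def canonical_proportions(alpha_plus, beta_plus):
    # Equation 7: map (alpha+, beta+) = (a_mu^nu, a_nu^mu)
    # to the max-canonical mixing proportions (alpha*, beta*).
    denom = 1.0 - alpha_plus * beta_plus
    alpha_star = alpha_plus * (1.0 - beta_plus) / denom
    beta_star = (1.0 - beta_plus) / denom
    return alpha_star, beta_star

print(canonical_proportions(0.3, 0.1))  # about (0.278, 0.928), with alpha* < beta*
```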
4 Univariate Transformation
The theory and algorithms for class prior estimation are agnostic to the dimensionality of the data;
in practice, however, this dimensionality can have important consequences. Parametric Gaussian
mixture models trained via expectation-maximization (EM) are known to strongly suffer from collinearity in high-dimensional data. Nonparametric (kernel) density estimation is also known to have
curse-of-dimensionality issues, both in theory (Liu et al., 2007) and in practice (Scott, 2008).
We address the curse of dimensionality by transforming the data to a single dimension. The transformation $\tau : \mathcal{X} \to \mathbb{R}$, surprisingly, is simply an output of a non-traditional classifier trained to separate the labeled sample, $L$, from the unlabeled sample, $U$. The transform is similar to that in (Jain et al., 2016), except that it is not required to be calibrated like a posterior distribution; as shown below, a good ranking function is sufficient. First, however, we introduce notation and formalize the data generation steps (Figure 1).
Let $X$ be a random variable taking values in $\mathcal{X}$, capturing the true distribution of inputs, $\mu$, and $Y$ be an unobserved random variable taking values in $\mathcal{Y}$, giving the true class of the inputs. It follows that $X|Y = 0$ and $X|Y = 1$ are distributed according to $\mu_0$ and $\mu_1$, respectively. Let $S$ be a selection random variable, whose value in $\mathcal{S} = \{0, 1, 2\}$ determines the sample to which an input $x$ is added (Figure 1). When $S = 1$, $x$ is added to the noisy labeled sample; when $S = 0$, $x$ is added to the unlabeled sample; and when $S = 2$, $x$ is not added to either of the samples.
[Figure 1: flowchart of the labeling procedure. An input is either not selected for labeling (Unlabeled, $S = 0$) or selected; a selected input with true label $Y = 1$ becomes a noisy positive ($S = 1$) with probability $\epsilon_1$, one with $Y = 0$ with probability $\epsilon_0$, and inputs whose labeling attempt fails are dropped ($S = 2$).]
Figure 1: The labeling procedure, with $S$ taking values from $\mathcal{S} = \{0, 1, 2\}$. In the first step, the sample is randomly selected to attempt labeling, with some probability independent of $X$ or $Y$. If it is not selected, it is added to the "Unlabeled" set. If it is selected, then labeling is attempted. If the true label is $Y = 1$, then with probability $\epsilon_1 \in (0, 1)$, the labeling will succeed and it is added to "Noisy positives". Otherwise, it is added to the "Dropped" set. If the true label is $Y = 0$, then the attempted labeling is much more likely to fail, but because of noise, could succeed. The attempted label of $Y = 0$ succeeds with probability $\epsilon_0$, and is added to "Noisy positives", even though it is actually a negative instance. $\epsilon_0 = 0$ leads to the no-noise case and the noise increases as $\epsilon_0$ increases. $\beta = \epsilon_1\alpha/(\epsilon_1\alpha + \epsilon_0(1 - \alpha))$ gives the proportion of positives in the "Noisy positives".
It follows that $X^u = X|S = 0$ and $X^l = X|S = 1$ are distributed according to $\mu$ and $\nu$, respectively. We make the following assumptions, which are consistent with the statements above:
$$p(y|S = 0) = p(y), \qquad (8)$$
$$p(y = 1|S = 1) = \beta, \qquad (9)$$
$$p(x|s, y) = p(x|y). \qquad (10)$$
Assumptions 8 and 9 state that the proportion of positives in the unlabeled sample and the labeled sample matches the true proportion in $\mu$ and $\nu$, respectively. Assumption 10 states that the distribution of the positive inputs (and the negative inputs) in both the unlabeled and the labeled samples is equal and unbiased. Lemma 2 gives the implications of these assumptions. Statement 3 in Lemma 2 is particularly interesting and perhaps counter-intuitive, as it states that with non-zero probability some inputs need to be dropped.
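A short simulation of the procedure in Figure 1 illustrates assumptions 8 and 9 and the stated relation for $\beta$; the selection and success probabilities below are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = 0.3             # class prior p(Y = 1)
p_select = 0.5          # probability an input is selected for labeling (assumed)
eps1, eps0 = 0.8, 0.05  # labeling-success probabilities for Y = 1 and Y = 0

n = 200000
y = (rng.random(n) < alpha).astype(int)
selected = rng.random(n) < p_select
success = rng.random(n) < np.where(y == 1, eps1, eps0)

s = np.full(n, 2)            # S = 2: dropped
s[~selected] = 0             # S = 0: unlabeled
s[selected & success] = 1    # S = 1: noisy positive

print(y[s == 0].mean())  # close to alpha (assumption 8)
print(y[s == 1].mean())  # close to beta = eps1*alpha/(eps1*alpha + eps0*(1-alpha))
```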
Lemma 2 Let $X$, $Y$ and $S$ be random variables taking values in $\mathcal{X}$, $\mathcal{Y}$ and $\mathcal{S}$, respectively, and $X^u = X|S = 0$ and $X^l = X|S = 1$. For measures $\mu, \nu, \mu_0, \mu_1$ satisfying Equations 3 and 4 and $\mu_1 \neq \mu_0$, let $\mu, \mu_0, \mu_1$ give the distribution of $X$, $X|Y = 0$ and $X|Y = 1$, respectively. If $X$, $Y$ and $S$ satisfy assumptions 8, 9 and 10, then
1. $X$ is independent of $S = 0$; i.e., $p(x|S = 0) = p(x)$.
2. $X^u$ and $X^l$ are distributed according to $\mu$ and $\nu$, respectively.
3. $p(S = 2) \neq 0$.
The proof is in the Appendix. Next, we highlight the conditions under which the score function $\tau$ preserves $\alpha^*$. Observing that $S$ serves as the pseudo class label for labeled vs. unlabeled classification as well, we first give an expression for the posterior:
$$\tau_p(x) = p(S = 1|x, S \in \{0, 1\}), \quad \forall x \in \mathcal{X}. \qquad (11)$$
Theorem 2 ($\alpha^*$-preserving transform) Let random variables $X, Y, S, X^u, X^l$ and measures $\mu, \nu, \mu_0, \mu_1$ be as defined in Lemma 2. Let $\tau_p$ be the posterior as defined in Equation 11 and $\tau = H \circ \tau_p$, where $H$ is a 1-to-1 function on $[0, 1]$ and $\circ$ is the composition operator. Assume
1. $(\mu_0, \mu_1) \in \Pi^{\mathrm{res}}$,
2. $X^u$ and $X^l$ are continuous with densities $f$ and $g$, respectively,
3. $\mu_\tau$, $\nu_\tau$, $\mu_{\tau 1}$ are the measures corresponding to $\tau(X^u)$, $\tau(X^l)$, $\tau(X_1)$, respectively,
4. $(\alpha^+, \beta^+, \alpha^*, \beta^*) = (a_\mu^\nu, a_\nu^\mu, a_\mu^{\mu_1}, a_\nu^{\mu_1})$ and $(\alpha_\tau^+, \beta_\tau^+, \alpha_\tau^*, \beta_\tau^*) = (a_{\mu_\tau}^{\nu_\tau}, a_{\nu_\tau}^{\mu_\tau}, a_{\mu_\tau}^{\mu_{\tau 1}}, a_{\nu_\tau}^{\mu_{\tau 1}})$.
Then
$$(\alpha_\tau^+, \beta_\tau^+, \alpha_\tau^*, \beta_\tau^*) = (\alpha^+, \beta^+, \alpha^*, \beta^*),$$
and so $\tau$ is an $\alpha^*$-preserving transformation.
Moreover, $\tau_p$ can also be used to compute the true posterior probability:
$$p(Y = 1|x) = \frac{\alpha^*(1 - \alpha^*)}{\beta^* - \alpha^*}\left(\frac{p(S = 0)}{p(S = 1)} \cdot \frac{\tau_p(x)}{1 - \tau_p(x)} - \frac{1 - \beta^*}{1 - \alpha^*}\right). \qquad (12)$$
The proof is in the Appendix. Theorem 2 shows that $\alpha^*$ is the same for the original data and the transformed data if the transformation function $\tau$ can be expressed as a composition of $\tau_p$ and a one-to-one function, $H$, defined on $[0, 1]$. Trivially, $\tau_p$ itself is one such function. We emphasize, however, that $\alpha^*$-preservation is not limited by the efficacy of the calibration algorithm; uncalibrated scoring that ranks inputs as $\tau_p(x)$ also preserves $\alpha^*$. Theorem 2 further demonstrates how the true posterior, $p(Y = 1|x)$, can be recovered from $\tau_p$ by plugging estimates of $\tau_p$, $p(S=0)/p(S=1)$, $\alpha^*$ and $\beta^*$ into Equation 12. The posterior probability $\tau_p$ can be estimated directly by using a probabilistic classifier or by calibrating a classifier's score (Platt, 1999; Niculescu-Mizil and Caruana, 2005); $|U|/|L|$ serves as an estimate of $p(S=0)/p(S=1)$; Section 5 gives parametric and nonparametric approaches for estimation of $\alpha^*$ and $\beta^*$.
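Putting Equation 12 to work requires only an estimate of $\tau_p$ and of the two mixing proportions; a hedged sketch, where the inputs are assumed to be estimated elsewhere (e.g., by a calibrated classifier and by the algorithms of Section 5):

```python
import numpy as np

def true_posterior(tau_p, alpha_star, beta_star, n_unlabeled, n_labeled):
    # Equation 12: recover p(Y = 1 | x) from the labeled-vs-unlabeled
    # posterior tau_p; |U|/|L| estimates p(S = 0)/p(S = 1).
    odds = (n_unlabeled / n_labeled) * tau_p / (1.0 - tau_p)
    scale = alpha_star * (1.0 - alpha_star) / (beta_star - alpha_star)
    post = scale * (odds - (1.0 - beta_star) / (1.0 - alpha_star))
    return np.clip(post, 0.0, 1.0)  # clip noisy estimates to a valid probability
```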
5 Algorithms
In this section, we derive a parametric and a nonparametric algorithm to estimate $\alpha^*$ and $\beta^*$ from the unlabeled sample, $U = \{X_i^u\}$, and the noisy positive sample, $L = \{X_i^l\}$. In theory, both approaches can handle multivariate samples; in practice, however, to circumvent the curse of dimensionality, we exploit the theory of $\alpha^*$-preserving univariate transforms to transform the samples.
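In practice the transform can be any good labeled-vs-unlabeled scorer; the experiments use out-of-bag scores of neural-network ensembles, and the logistic-regression stand-in below is only a minimal sketch of the same idea:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def nontraditional_transform(x_labeled, x_unlabeled):
    # Train a classifier to separate L (s = 1) from U (s = 0) and return
    # tau_p on both samples; by Theorem 2, any 1-to-1 rescaling of these
    # scores preserves alpha*.
    X = np.vstack([x_labeled, x_unlabeled])
    s = np.r_[np.ones(len(x_labeled)), np.zeros(len(x_unlabeled))]
    clf = LogisticRegression(max_iter=1000).fit(X, s)
    return (clf.predict_proba(x_labeled)[:, 1],
            clf.predict_proba(x_unlabeled)[:, 1])
```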
Parametric approach. The parametric approach is derived by modeling each sample as a two-component Gaussian mixture, sharing the same components but having different mixing proportions:
$$X_i^u \sim \alpha N(u_1, \Sigma_1) + (1 - \alpha) N(u_0, \Sigma_0),$$
$$X_i^l \sim \beta N(u_1, \Sigma_1) + (1 - \beta) N(u_0, \Sigma_0),$$
where $u_1, u_0 \in \mathbb{R}^d$ and $\Sigma_1, \Sigma_0 \in S_{++}^d$, the set of all $d \times d$ positive definite matrices. The algorithm is an extension of the EM approach for Gaussian mixture models (GMMs) where, instead of estimating the parameters of a single mixture, the parameters of both mixtures $(\alpha, \beta, u_0, u_1, \Sigma_0, \Sigma_1)$ are estimated simultaneously by maximizing the combined likelihood over both $U$ and $L$. This approach, which we refer to as a multi-sample GMM (MSGMM), exploits the constraint that the two mixtures share the same components. The update rules and their derivation are given in the Appendix.
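A one-dimensional EM sketch of the shared-component idea follows; this simplification is ours, while the multivariate update rules are the ones derived in the Appendix:

```python
import numpy as np
from scipy.stats import norm

def msgmm_1d(xu, xl, iters=200):
    # EM for two 1-D Gaussian mixtures that share components:
    # xu ~ a*N(m1, s1) + (1 - a)*N(m0, s0), xl ~ b*N(m1, s1) + (1 - b)*N(m0, s0).
    a, b = 0.5, 0.5
    m0, m1 = np.percentile(np.r_[xu, xl], [25, 75])
    s0 = s1 = np.r_[xu, xl].std()
    for _ in range(iters):
        # E-step: responsibilities of component 1 within each sample.
        ru = a * norm.pdf(xu, m1, s1)
        ru = ru / (ru + (1 - a) * norm.pdf(xu, m0, s0))
        rl = b * norm.pdf(xl, m1, s1)
        rl = rl / (rl + (1 - b) * norm.pdf(xl, m0, s0))
        # M-step: per-sample mixing proportions, pooled component parameters.
        a, b = ru.mean(), rl.mean()
        r, x = np.r_[ru, rl], np.r_[xu, xl]
        m1 = (r * x).sum() / r.sum()
        m0 = ((1 - r) * x).sum() / (1 - r).sum()
        s1 = np.sqrt((r * (x - m1) ** 2).sum() / r.sum())
        s0 = np.sqrt(((1 - r) * (x - m0) ** 2).sum() / (1 - r).sum())
    return a, b
```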
Nonparametric approach. Our nonparametric strategy directly exploits the results of Lemma 1 and Theorem 1, which give a direct connection between $(\alpha^+ = a_\mu^\nu, \beta^+ = a_\nu^\mu)$ and $(\alpha^*, \beta^*)$. Therefore, for a two-component mixture sample, $M$, and a sample from one of the components, $C$, it only requires an algorithm to estimate the maximum proportion of $C$ in $M$. For this purpose, we use the AlphaMax algorithm (Jain et al., 2016), briefly summarized in the Appendix. Specifically, our two-step approach for estimating $\alpha^*$ and $\beta^*$ is as follows: (i) estimate $\alpha^+$ and $\beta^+$ as outputs of AlphaMax$(U, L)$ and AlphaMax$(L, U)$, respectively; (ii) estimate $(\alpha^*, \beta^*)$ from the estimates of $(\alpha^+, \beta^+)$ by applying Equation 7. We refer to our nonparametric algorithm as AlphaMax-N.
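The two-step procedure is a thin wrapper around the mixture-proportion estimator; `alphamax` below is a placeholder for that subroutine (not reproduced here), assumed to return the maximum proportion of the second sample's distribution within the first:

```python
def alphamax_n(u, l, alphamax):
    # Step (i): estimate alpha+ = a_mu^nu and beta+ = a_nu^mu with AlphaMax.
    alpha_plus = alphamax(u, l)
    beta_plus = alphamax(l, u)
    # Step (ii): convert to (alpha*, beta*) via Equation 7.
    denom = 1.0 - alpha_plus * beta_plus
    return (alpha_plus * (1.0 - beta_plus) / denom,
            (1.0 - beta_plus) / denom)
```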
6 Empirical investigation
In this section we systematically evaluate the new algorithms in a controlled, synthetic setting as
well as on a variety of data sets from the UCI Machine Learning Repository (Lichman, 2013).
Experiments on synthetic data: We start by evaluating all algorithms in a univariate setting where both mixing proportions, $\alpha$ and $\beta$, are known. We generate unit-variance Gaussian and unit-scale Laplace-distributed i.i.d. samples and explore the impact of mixing proportions, the size of the component sample, and the separation and overlap between the mixing components on the accuracy of estimation. The class prior $\alpha$ was varied over $\{0.05, 0.25, 0.50\}$ and the noise component $\beta$ over $\{1.00, 0.95, 0.75\}$. The size of the labeled sample $L$ was varied over $\{100, 1000\}$, whereas the size of the unlabeled sample $U$ was fixed at 10000.
Experiments on real-life data: We considered twelve real-life data sets from the UCI Machine
Learning Repository. To adjust these data to our problems, categorical features were transformed into numerical features using a sparse binary representation, the regression data sets were transformed into classification problems by thresholding at the mean of the target variable, and the multi-class classification problems were converted into binary problems by combining classes. In each data set, a subset of positive and negative examples was randomly selected to provide a labeled sample, while the remaining data (without class labels) were used as unlabeled data. The size of the labeled sample was kept at 1000 (or 100 for small data sets) and the maximum size of the unlabeled data was set to 10000.
Algorithms: We compare the AlphaMax-N and MSGMM algorithms to the Elkan-Noto algorithm
(Elkan and Noto, 2008) as well as the noiseless version of AlphaMax (Jain et al., 2016). There
are several versions of the Elkan-Noto estimator and each can use any underlying classifier. We
used the e1 alternative estimator combined with the ensembles of 100 two-layer feed-forward neural
networks, each with five hidden units. The out-of-bag scores of the same classifier were used as
a class-prior preserving transformation that created an input to the AlphaMax algorithms. It is
important to mention that neither the Elkan-Noto nor the AlphaMax algorithm was developed to handle noisy labeled data. In addition, the theory behind the Elkan-Noto estimator restricts its use to class-conditional distributions with non-overlapping supports. The algorithm by du Plessis and Sugiyama (2014) minimizes the same objective as the e1 Elkan-Noto estimator and, thus, was not implemented.
Evaluation: All experiments were repeated 50 times to be able to draw conclusions with statistical
significance. In real-life data, the labeled sample was created randomly by choosing an appropriate
number of positive and negative examples to satisfy the condition for $\beta$ and the size of the labeled sample, while the remaining data was used as the unlabeled sample. Therefore, the class prior in the unlabeled data varies with the selection of the noise parameter $\beta$. The mean absolute difference
between the true and estimated class priors was used as a performance measure. The best-performing algorithm on each data set was determined by multiple hypothesis testing using a P-value threshold of 0.05 and the Bonferroni correction.
Results: The comprehensive results for synthetic data drawn from univariate Gaussian and Laplace distributions are shown in the Appendix (Table 2). In these experiments, no transformation was applied
prior to running any of the algorithms. As expected, the results show excellent performance of the
MSGMM model on the Gaussian data. These results significantly degrade on Laplace-distributed
data, suggesting sensitivity to the underlying assumptions. On the other hand, AlphaMax-N was
accurate over all data sets and also robust to noise. These results suggest that the new parametric and nonparametric algorithms perform well in these controlled settings.
Table 1 shows the results on twelve real data sets. Here, the AlphaMax and AlphaMax-N algorithms demonstrate significant robustness to noise, although the parametric version MSGMM was competitive in some cases. On the other hand, the Elkan-Noto algorithm expectedly degrades with noise.
Finally, we investigated the practical usefulness of the $\alpha^*$-preserving transform. Table 3 (Appendix) shows the results of AlphaMax-N and MSGMM on the real data sets, with and without using the transform. Because of computational and numerical issues, we reduced the dimensionality by using principal component analysis (the original data caused matrix singularity issues for MSGMM and density estimation issues for AlphaMax-N). MSGMM deteriorates significantly without the transform, whereas AlphaMax-N preserves some signal for the class prior. AlphaMax-N with the transform, however, shows superior performance on most data sets.
7 Related work
Class prior estimation in a semi-supervised setting, including positive-unlabeled learning, has been
extensively discussed previously; see Saerens et al. (2002); Cortes et al. (2008); Elkan and Noto
(2008); Blanchard et al. (2010); Scott et al. (2013); Jain et al. (2016) and references therein. Recently, a general setting for label noise has also been introduced, called the mutual contamination
model. The aim under this model is to estimate multiple unknown base distributions, using multiple random samples that are composed of different convex combinations of those base distributions
(Katz-Samuels and Scott, 2016). The setting of asymmetric label noise is a subset of this more
general setting, treated under general conditions by Scott et al. (2013), and previously investigated
under a more restrictive setting as co-training (Blum and Mitchell, 1998). A natural approach is
to use robust estimation to learn in the presence of class noise; this strategy, however, has been
shown to be ineffective, both theoretically (Long and Servedio, 2010; Manwani and Sastry, 2013)
and empirically (Hawkins and McLachlan, 1997; Bashir and Carter, 2005), indicating the need to
explicitly model the noise. Generative mixture model approaches have also been developed, which
explicitly model the noise (Lawrence and Scholkopf, 2001; Bouveyron and Girard, 2009); these algorithms, however, assume labeled data for each class. As the most related work, though Scott et al.
(2013) did not explicitly treat positive-unlabeled learning with noisy positives, their formulation can incorporate this setting by using $\pi_0 = \alpha$ and $\pi_1 = 1 - \beta$. The theoretical and algorithmic treatment, however, is very different. Their focus is on identifiability and analyzing convergence rates and statistical properties, assuming access to some $\kappa^*$ function which can obtain proportions
Table 1: Mean absolute difference between estimated and true mixing proportion over twelve data sets from the UCI Machine Learning Repository. Statistical significance was evaluated by comparing the Elkan-Noto algorithm, AlphaMax, AlphaMax-N, and the multi-sample GMM after applying a multivariate-to-univariate transform (MSGMM-T). The bold font type indicates the winner and the asterisk indicates statistical significance. For each data set, shown are the true mixing proportion ($\alpha$), true proportion of the positives in the labeled sample ($\beta$), sample dimensionality ($d$), the number of positive examples ($n_1$), the total number of examples ($n$), and the area under the ROC curve (AUC) for a model trained between labeled and unlabeled data.

Data       α      β     AUC    d    n1    n      Elkan-Noto  AlphaMax  AlphaMax-N  MSGMM-T
Bank       0.095  1.00  0.842  13   5188  45000  0.241       0.070     0.037*      0.163
Bank       0.096  0.95  0.819  13   5188  45000  0.284       0.079     0.036*      0.155
Bank       0.101  0.75  0.744  13   5188  45000  0.443       0.124     0.040*      0.127
Concrete   0.419  1.00  0.685  8    490   1030   0.329       0.141     0.181       0.077*
Concrete   0.425  0.95  0.662  8    490   1030   0.363       0.174     0.231       0.095*
Concrete   0.446  0.75  0.567  8    490   1030   0.531       0.212     0.272       0.233
Gas        0.342  1.00  0.825  127  2565  5574   0.017       0.011     0.017       0.008*
Gas        0.353  0.95  0.795  127  2565  5574   0.078       0.016     0.006       0.006
Gas        0.397  0.75  0.672  127  2565  5574   0.396       0.137     0.009       0.006*
Housing    0.268  1.00  0.810  13   209   506    0.159       0.087     0.094       0.209
Housing    0.281  0.95  0.777  13   209   506    0.226       0.094     0.110       0.204
Housing    0.330  0.75  0.651  13   209   506    0.501       0.125     0.134       0.172
Landsat    0.093  1.00  0.933  36   1508  6435   0.074       0.009     0.007*      0.157
Landsat    0.103  0.95  0.904  36   1508  6435   0.110       0.015     0.008*      0.152
Landsat    0.139  0.75  0.788  36   1508  6435   0.302       0.063     0.012*      0.143
Mushroom   0.409  1.00  0.792  126  3916  8124   0.029       0.015*    0.022       0.037
Mushroom   0.416  0.95  0.766  126  3916  8124   0.087       0.015     0.008*      0.037
Mushroom   0.444  0.75  0.648  126  3916  8124   0.370       0.140     0.020       0.024
Pageblock  0.086  1.00  0.885  10   560   5473   0.116       0.026*    0.044       0.129
Pageblock  0.087  0.95  0.858  10   560   5473   0.137       0.031*    0.052       0.125
Pageblock  0.090  0.75  0.768  10   560   5473   0.256       0.041*    0.064       0.111
Pendigit   0.243  1.00  0.875  16   3430  10992  0.030       0.006*    0.009       0.081
Pendigit   0.248  0.95  0.847  16   3430  10992  0.071       0.011     0.005*      0.074
Pendigit   0.268  0.75  0.738  16   3430  10992  0.281       0.093     0.007*      0.062
Pima       0.251  1.00  0.735  8    268   768    0.351       0.120     0.111       0.171
Pima       0.259  0.95  0.710  8    268   768    0.408       0.118     0.110       0.168
Pima       0.289  0.75  0.623  8    268   768    0.586       0.144     0.156       0.175
Shuttle    0.139  1.00  0.929  9    8903  58000  0.024*      0.027     0.029       0.157
Shuttle    0.140  0.95  0.903  9    8903  58000  0.052       0.004*    0.007       0.157
Shuttle    0.143  0.75  0.802  9    8903  58000  0.199       0.047     0.004*      0.148
Spambase   0.226  1.00  0.842  57   1813  4601   0.184       0.046     0.041       0.059
Spambase   0.240  0.95  0.812  57   1813  4601   0.246       0.059     0.042*      0.063
Spambase   0.295  0.75  0.695  57   1813  4601   0.515       0.155     0.044*      0.059
Wine       0.566  1.00  0.626  11   4113  6497   0.290       0.083     0.060       0.070
Wine       0.575  0.95  0.610  11   4113  6497   0.322       0.113     0.063       0.076
Wine       0.612  0.75  0.531  11   4113  6497   0.420       0.322     0.353       0.293
between samples. They do not explicitly address issues with high-dimensional data nor focus on algorithms to obtain $\kappa^*$. In contrast, we focus primarily on the univariate transformation to handle high-dimensional data and practical algorithms for estimating $\alpha^*$. Supervised learning used for the class-prior-preserving transformation provides a rich set of techniques to address high-dimensional data.
8 Conclusion
In this paper, we developed a practical algorithm for classification of positive-unlabeled data with
noise in the labeled data set. In particular, we focused on a strategy for high-dimensional data,
providing a univariate transform that reduces the dimension of the data, preserves the class prior so
that estimation in this reduced space remains valid and is then further useful for classification. This
approach provides a simple algorithm that simultaneously improves estimation of the class prior and
provides a resulting classifier. We derived a parametric and a nonparametric version of the algorithm
and then evaluated its performance on a wide variety of learning scenarios and data sets. To the best
of our knowledge, this algorithm represents one of the first practical and easy-to-use approaches to
learning with high-dimensional positive-unlabeled data with noise in the labels.
Acknowledgements
We thank Prof. Michael W. Trosset for helpful comments. Grant support: NSF DBI-1458477, NIH
R01MH105524, NIH R01GM103725, and the Indiana University Precision Health Initiative.
References
S. Bashir and E. M. Carter. High breakdown mixture discriminant analysis. J Multivar Anal, 93(1):102–111, 2005.
G. Blanchard, G. Lee, and C. Scott. Semi-supervised novelty detection. J Mach Learn Res, 11:2973–3009, 2010.
A. Blum and T. Mitchell. Combining labeled and unlabeled data with co-training. COLT 1998, pages 92–100, 1998.
C. Bouveyron and S. Girard. Robust supervised classification with mixture models: learning from data with uncertain labels. Pattern Recognit, 42(11):2649–2658, 2009.
C. Cortes, M. Mohri, M. Riley, and A. Rostamizadeh. Sample selection bias correction theory. ALT 2008, pages 38–53, 2008.
F. Denis, R. Gilleron, and F. Letouzey. Learning from positive and unlabeled examples. Theor Comput Sci, 348(16):70–83, 2005.
M. C. du Plessis and M. Sugiyama. Class prior estimation from positive and unlabeled data. IEICE Trans Inf & Syst, E97-D(5):1358–1362, 2014.
C. Elkan and K. Noto. Learning classifiers from only positive and unlabeled data. KDD 2008, pages 213–220, 2008.
D. M. Hawkins and G. J. McLachlan. High-breakdown linear discriminant analysis. J Am Stat Assoc, 92(437):136–143, 1997.
S. Jain, M. White, M. W. Trosset, and P. Radivojac. Nonparametric semi-supervised learning of class proportions. arXiv preprint arXiv:1601.01944, 2016. URL http://arxiv.org/abs/1601.01944.
J. Katz-Samuels and C. Scott. A mutual contamination analysis of mixed membership and partial label models. arXiv preprint arXiv:1602.06235, 2016. URL http://arxiv.org/abs/1602.06235.
N. D. Lawrence and B. Scholkopf. Estimating a kernel Fisher discriminant in the presence of label noise. ICML 2001, pages 306–313, 2001.
M. Lichman. UCI Machine Learning Repository, 2013. URL http://archive.ics.uci.edu/ml.
H. Liu, J. D. Lafferty, and L. A. Wasserman. Sparse nonparametric density estimation in high dimensions using the rodeo. AISTATS 2007, pages 283–290, 2007.
P. M. Long and R. A. Servedio. Random classification noise defeats all convex potential boosters. Mach Learn, 78(3):287–304, 2010.
N. Manwani and P. S. Sastry. Noise tolerance under risk minimization. IEEE T Cybern, 43(3):1146–1151, 2013.
A. K. Menon, B. van Rooyen, C. S. Ong, and R. C. Williamson. Learning from corrupted binary labels via class-probability estimation. ICML 2015, pages 125–134, 2015.
A. Niculescu-Mizil and R. Caruana. Obtaining calibrated probabilities from boosting. UAI 2005, pages 413–420, 2005.
J. C. Platt. Probabilistic outputs for support vector machines and comparison to regularized likelihood methods, pages 61–74. MIT Press, 1999.
H. G. Ramaswamy, C. Scott, and A. Tewari. Mixture proportion estimation via kernel embedding of distributions. arXiv preprint arXiv:1603.02501, 2016. URL https://arxiv.org/abs/1603.02501.
M. D. Reid and R. C. Williamson. Composite binary losses. J Mach Learn Res, 11:2387–2422, 2010.
M. Saerens, P. Latinne, and C. Decaestecker. Adjusting the outputs of a classifier to new a priori probabilities: a simple procedure. Neural Comput, 14:21–41, 2002.
T. Sanderson and C. Scott. Class proportion estimation with application to multiclass anomaly rejection. AISTATS 2014, pages 850–858, 2014.
C. Scott, G. Blanchard, and G. Handy. Classification with asymmetric label noise: consistency and maximal denoising. J Mach Learn Res W&CP, 30:489–511, 2013.
D. W. Scott. The curse of dimensionality and dimension reduction. Multivariate Density Estimation: Theory, Practice, and Visualization, pages 195–217, 2008.
H. Steen and M. Mann. The ABC's (and XYZ's) of peptide sequencing. Nat Rev Mol Cell Biol, 5(9):699–711, 2004.
G. Ward, T. Hastie, S. Barry, J. Elith, and J. R. Leathwick. Presence-only data and the EM algorithm. Biometrics, 65(2):554–563, 2009.
9
| 6167 |@word repository:4 briefly:1 version:4 steen:2 proportion:21 thereby:1 mention:1 carry:1 reduction:1 liu:2 contains:4 efficacy:2 score:3 lichman:2 offering:1 past:1 reaction:1 existing:1 recovered:2 comparing:1 spambase:1 mushroom:1 realistic:1 numerical:2 kdd:1 enables:1 designed:1 update:1 unidentifiability:2 v:2 generative:1 selected:4 letouzey:1 characterization:1 provides:3 boosting:1 denis:2 org:3 five:1 direct:1 scholkopf:2 initiative:1 prove:2 shantanu:1 combine:1 introduce:2 theoretically:1 expected:2 themselves:1 nor:3 multi:3 researched:1 curse:4 becomes:1 estimating:9 moreover:4 notation:2 underlying:2 mass:1 agnostic:1 interpreted:1 unspecified:1 minimizes:1 developed:3 finding:1 indiana:4 unobserved:1 transformation:10 pseudo:1 classifier:14 demonstrates:1 platt:2 assoc:1 unit:3 grant:1 radivojac:2 appear:2 reid:2 positive:47 dropped:3 treat:1 sd:1 consequence:1 despite:1 mach:4 analyzing:1 quadruple:1 might:1 therein:1 co:2 limited:2 practical:9 commerce:1 unique:2 enforces:1 testing:1 practice:5 elith:1 definite:1 handy:1 procedure:2 area:1 empirical:1 significantly:2 composite:1 word:1 protein:4 suggest:1 get:1 cannot:3 unlabeled:37 selection:3 operator:1 risk:1 applying:2 cybern:1 optimize:1 missing:1 maximizing:1 convex:2 focused:1 identifying:1 wasserman:1 rule:1 estimator:4 dbi:1 enabled:1 embedding:1 handle:3 laplace:3 target:1 user:2 colinearity:1 anomaly:1 hypothesis:1 elkan:13 satisfying:1 particularly:2 asymmetric:3 breakdown:2 predicts:2 labeled:23 preprint:3 counter:1 technological:1 contamination:2 uncalibrated:1 balanced:1 substantial:1 transforming:2 ong:1 trained:3 algebra:1 pendigit:1 negatively:1 completely:1 mislabeled:1 represented:2 derivation:1 jain:9 effective:2 recognit:1 labeling:8 choosing:2 quite:1 whose:1 larger:1 posed:1 say:4 otherwise:1 ward:3 transform:9 noisy:10 itself:1 housing:1 analytical:1 propose:3 predrag:2 product:2 maximal:1 uci:5 combining:2 mixing:9 poorly:1 intuitive:1 convergence:1 optimum:1 xil:2 generating:1 derive:2 develop:2 stat:1 implemented:1 come:1 nontraditional:3 mann:2 f1:6 investigation:1 biological:1 singularity:1 theor:1 extension:1 correction:2 practically:1 hawkins:2 considered:2 ic:1 lawrence:2 mapping:1 algorithmic:1 noto:13 wine:1 purpose:1 estimation:28 bag:1 label:19 peptide:2 individually:1 vice:2 mclachlan:2 minimization:1 mit:1 clearly:1 gaussian:6 aim:2 shuttle:1 derived:2 focus:3 plessis:3 rank:1 likelihood:2 mainly:1 indicates:2 sequencing:1 contrast:1 rostamizadeh:1 am:1 helpful:1 landsat:1 niculescu:2 membership:1 typically:2 hidden:1 relation:1 transformed:3 translational:1 classification:14 overall:1 issue:5 colt:1 priori:1 development:3 mutual:3 equal:3 having:3 biology:1 identical:1 represents:1 icml:2 throughput:1 contaminated:1 report:1 few:2 primarily:1 randomly:3 composed:1 simultaneously:3 preserve:6 comprehensive:1 bouveyron:2 n1:2 attempt:2 ab:3 detection:1 interest:2 evaluation:1 adjust:1 mixture:23 behind:1 implication:1 accurate:1 partial:1 necessary:2 trosset:2 biometrics:1 re:16 decaestecker:1 theoretical:3 uncertain:1 instance:1 modeling:3 caruana:2 maximization:1 riley:1 cost:1 gilleron:1 subset:2 usefulness:1 reported:1 varies:2 corrupted:1 e97:1 synthetic:4 calibrated:2 combined:2 density:8 twelve:3 sensitivity:1 probabilistic:2 lee:1 michael:1 concrete:1 containing:4 choose:1 booster:1 leading:3 actively:2 syst:1 suggesting:1 converted:1 potential:1 singleton:1 chemistry:1 summarized:1 bold:1 blanchard:5 satisfy:2 pageblock:1 explicitly:5 ranking:1 caused:1 later:1 
ramaswamy:2 picked:1 bashir:2 observing:1 start:1 competitive:1 capability:2 identifiability:13 contribution:1 accuracy:1 variance:1 ensemble:1 correspond:2 yes:2 identification:1 none:1 sharing:1 facebook:1 definition:1 servedio:2 proof:4 bloomington:1 treatment:1 adjusting:1 mitchell:2 knowledge:2 emerges:1 dimensionality:8 improves:1 formalize:2 actually:2 feed:1 supervised:5 specify:1 formulation:2 unidentifiable:2 though:3 strongly:1 evaluated:2 hand:3 overlapping:1 menon:2 perhaps:1 semisupervised:1 ieice:1 usa:1 calibrating:1 contain:2 true:13 unbiased:1 manwani:2 read:1 white:2 deal:1 bonferroni:1 uniquely:3 auc:2 samuel:2 criterion:3 demonstrate:1 saerens:2 cp:1 recently:1 nih:2 superior:1 empirically:1 winner:1 defeat:1 extend:1 discussed:1 katz:2 refer:4 composition:2 significant:1 versa:2 rd:1 trivially:1 sastry:2 consistency:1 sugiyama:3 access:2 f0:6 calibration:1 base:2 enzyme:1 posterior:14 multivariate:3 recent:1 identifiablity:1 inf:1 scenario:4 binary:6 success:1 life:4 scoring:3 preserving:6 fortunately:2 greater:1 determine:1 novelty:2 barry:1 monotonically:1 signal:1 preservation:1 u0:4 sound:1 ii:1 multiple:3 semi:3 reduces:1 match:1 multivar:1 long:2 post:2 molecular:1 e1:2 plugging:1 controlled:2 impact:1 regression:1 noiseless:1 expectation:1 arxiv:9 sometimes:1 kernel:4 cell:1 spectrometry:1 whereas:4 addition:1 archive:1 ineffective:1 comment:1 latinne:1 lafferty:1 gmms:1 presence:3 easy:1 variety:3 irreducibility:1 hastie:1 multiclass:1 whether:2 recreate:1 expression:3 url:4 suffer:1 constitute:1 prefers:1 useful:1 tewari:1 transforms:3 nonparametric:13 amount:2 extensively:1 carter:2 simplest:1 reduced:2 generate:4 http:4 restricts:1 problematic:1 canonical:7 nsf:1 estimated:4 deteriorates:1 threshold:1 blum:2 drawn:1 neither:2 clean:1 gmm:2 utilize:1 kept:1 year:1 family:2 reasonable:1 separation:3 draw:1 appendix:8 capturing:1 layer:1 tackled:1 expectedly:1 identifiable:6 precisely:1 constraint:1 aspect:1 u1:4 performing:2 department:1 according:3 combination:1 remain:1 smaller:1 em:3 rev:1 modification:1 restricted:1 classconditional:2 equation:6 visualization:1 previously:2 remains:1 discus:2 fail:1 xyz:1 needed:1 know:1 sanderson:2 serf:2 available:1 permit:1 observe:2 occasional:1 appropriate:1 alternative:1 robustness:1 original:2 remaining:2 include:2 ensure:1 running:1 exploit:3 giving:1 restrictive:1 build:1 prof:1 objective:2 already:1 added:8 occurs:1 font:1 parametric:10 strategy:3 degrades:1 traditional:6 separate:1 thank:1 sci:1 parametrized:1 degrade:1 collected:1 fy:1 discriminant:3 reason:1 assuming:1 providing:1 setup:1 unfortunately:1 statement:5 pima:1 negative:13 rooyen:1 anal:1 proper:2 unknown:1 perform:1 observation:2 enabling:1 gas:1 situation:1 varied:2 leathwick:1 introduced:2 pair:11 required:2 specified:2 extensive:1 connection:1 trans:1 address:4 able:1 usually:2 below:3 scott:16 pattern:1 built:1 reliable:2 unsuccessful:1 max:5 including:1 overlap:1 treated:1 natural:1 circumvent:1 regularized:1 mizil:2 technology:1 imply:2 created:2 categorical:1 health:1 prior:32 discovery:1 acknowledgement:1 dislike:1 loss:4 fully:1 highlight:2 mixed:1 generation:2 interesting:1 asterisk:1 sufficient:1 consistent:2 thresholding:1 bank:1 systematically:1 share:1 mohri:1 surprisingly:1 side:1 bias:1 wide:2 taking:4 absolute:2 sparse:2 distributed:5 tolerance:1 curve:1 dimension:4 van:1 evaluating:1 valid:1 rich:1 forward:1 social:3 emphasize:1 ml:1 uai:1 discriminative:1 continuous:1 decade:1 table:4 learn:7 robust:5 contaminates:1 rodeo:1 
obtaining:1 mol:1 du:3 williamson:3 excellent:1 investigated:2 domain:1 did:1 significance:3 spread:1 aistats:2 noise:31 repeated:1 girard:2 x1:1 roc:1 precision:1 explicit:1 comput:2 catalyzes:1 tied:1 removing:1 theorem:8 cortes:2 alt:1 evidence:1 incorporating:1 false:1 msgmm:9 nat:1 rejection:1 simply:1 univariate:11 likely:1 explore:1 prevents:1 expressed:2 xiu:2 satisfies:1 relies:1 determines:1 abc:1 succeed:2 consequently:2 towards:1 alphamax:19 absence:3 fisher:1 martha:2 specifically:2 determined:4 except:1 justify:1 denoising:1 lemma:8 principal:1 called:1 total:1 experimental:2 attempted:3 succeeds:1 indicating:1 select:2 highdimensional:1 support:3 bioinformatics:1 violated:1 avoiding:1 incorporate:1 evaluate:1 biol:1 |
5,711 | 6,168 | Estimating the class prior and posterior from noisy
positives and unlabeled data
Shantanu Jain, Martha White, Predrag Radivojac
Department of Computer Science
Indiana University, Bloomington, Indiana, USA
{shajain, martha, predrag}@indiana.edu
Abstract
We develop a classification algorithm for estimating posterior distributions from
positive-unlabeled data, that is robust to noise in the positive labels and effective
for high-dimensional data. In recent years, several algorithms have been proposed
to learn from positive-unlabeled data; however, many of these contributions remain theoretical, performing poorly on real high-dimensional data that is typically
contaminated with noise. We build on this previous work to develop two practical
classification algorithms that explicitly model the noise in the positive labels and
utilize univariate transforms built on discriminative classifiers. We prove that these
univariate transforms preserve the class prior, enabling estimation in the univariate space and avoiding kernel density estimation for high-dimensional data. The
theoretical development and parametric and nonparametric algorithms proposed
here constitute an important step towards wide-spread use of robust classification
algorithms for positive-unlabeled data.
1
Introduction
Access to positive, negative and unlabeled examples is a standard assumption for most semisupervised binary classification techniques. In many domains, however, a sample from one of the
classes (say, negatives) may not be available, leading to the setting of learning from positive and
unlabeled data (Denis et al., 2005). Positive-unlabeled learning often emerges in sciences and commerce where an observation of a positive example (say, that a protein catalyzes reactions or that a
social network user likes a particular product) is usually reliable. Here, however, the absence of a
positive observation cannot be interpreted as a negative example. In molecular biology, for example,
an attempt to label a data point as positive (say, that a protein is an enzyme) may be unsuccessful for
a variety of experimental and biological reasons, whereas in social networks an explicit dislike of a
product may not be possible. Both scenarios lead to a situation where negative examples cannot be
actively collected.
Fortunately, the absence of negatively labeled examples can be tackled by incorporating unlabeled
examples as negatives, leading to the development of non-traditional classifiers. Here we follow the
terminology by Elkan and Noto (2008) that a traditional classifier predicts whether an example is
positive or negative, whereas a non-traditional classifier predicts whether the example is positive or
unlabeled. Positive vs. unlabeled (non-traditional) training is reasonable because the class posterior
? and also the optimum scoring function for composite losses (Reid and Williamson, 2010) ? in the
traditional setting is monotonically related to the posterior in the non-traditional setting. However,
the true posterior can be fully recovered from the non-traditional posterior only if we know the class
prior; i.e., the proportion of positives in unlabeled data. The knowledge of the class prior is also
necessary for estimation of the performance criteria such as the error rate, balanced error rate or
F-measure, and also for finding the right threshold for the non-traditional scoring function that leads
to an optimal classifier with respect to some criteria (Menon et al., 2015).
Class prior estimation in a nonparametric setting has been actively researched in the past decade
offering an extensive theory of identifiability (Ward et al., 2009; Blanchard et al., 2010; Scott et al.,
2013; Jain et al., 2016) and a few practical solutions (Elkan and Noto, 2008; Ward et al., 2009;
du Plessis and Sugiyama, 2014; Sanderson and Scott, 2014; Jain et al., 2016; Ramaswamy et al.,
2016). Application of these algorithms to real data, however, is limited in that none of the proposed algorithms simultaneously deals with noise in the labels and practical estimation for highdimensional data.
Much of the theory on learning class priors relies on the assumption that either the distribution of
positives is known or that the positive sample is clean. In practice, however, labeled data sets contain class-label noise, where an unspecified amount of negative examples contaminates the positive
sample. This is a realistic scenario in experimental sciences where technological advances enabled
generation of high-throughput data at a cost of occasional errors. One example for this comes from
the studies of proteins using analytical chemistry technology; i.e., mass spectrometry. For example,
in the process of peptide identification (Steen and Mann, 2004), bioinformatics methods are usually
set to report results with specified false discovery rate thresholds (e.g., 1%). Unfortunately, statistical assumptions in these experiments are sometimes violated thereby leading to substantial noise
in reported results, as in the case of identifying protein post-translational modifications. Similar
amounts of noise might appear in social networks such as Facebook, where some users select ?like?,
even when they do not actually like a particular post. Further, the only approach that does consider
similar such noise (Scott et al., 2013) requires density estimation, which is known to be problematic
for high-dimensional data.
In this work, we propose the first classification algorithm, with class prior estimation, designed
particularly for high-dimensional data with noise in the labeling of positive data. We first formalize
the problem of class prior estimation from noisy positive and unlabeled data. We extend the existing
identifiability theory for class prior estimation from positive-unlabeled data to this noise setting.
We then show that we can practically estimate class priors and the posterior distributions by first
transforming the input space to a univariate space, where density estimation is reliable. We prove
that these transformations preserve class priors and show that they correspond to training a nontraditional classifier. We derive a parametric algorithm and a nonparametric algorithm to learn the
class priors. Finally, we carry out experiments on synthetic and real-life data and provide evidence
that the new approaches are sound and effective.
2
Problem formulation
Consider a binary classification problem of mapping an input space X to an output space Y = {0, 1}.
Let f be the true distribution of inputs. It can be represented as the following mixture
f (x) = ?f1 (x) + (1 ? ?)f0 (x),
(1)
where x 2 X , y 2 Y, fy are distributions over X for the positive (y = 1) and negative (y = 0)
class, respectively; and ? 2 [0, 1) is the class prior or the proportion of the positive examples in f .
We will refer to a sample from f as unlabeled data.
Let g be the distribution of inputs for the labeled data. Because the labeled sample contains some
mislabeled examples, the corresponding distribution is also a mixture of f1 and a small proportion,
say 1 ? ?, of f0 . That is,
g(x) = ?f1 (x) + (1 ? ?)f0 (x),
(2)
where ? 2 (0, 1]. Observe that both mixtures have the same components but different mixing
proportions. The simplest scenario is that the mixing components f0 and f1 correspond to the classconditional distributions p(x|Y = 0) and p(x|Y = 1), respectively. However, our approach also
permits transformations of the input space X , thus resulting in a more general setup.
The objective of this work is to study the estimation of the class prior ? = p(Y = 1) and propose
practical algorithms for estimating ?. The efficacy of this estimation is clearly tied to ?, where as ?
gets smaller, the noise in the positive labels becomes larger. We will discuss identifiability of ? and
? and give a practical algorithm for estimating ? (and ?). We will then use these results to estimate
the posterior distribution of the class variable, p(y|x), despite the fact that the labeled set does not
contain any negative examples.
2
3
Identifiability
The class prior is identifiable if there is a unique class prior for a given pair (f, g). Much of the
identifiability characterization in this section has already been considered as the case of asymmetric
noise (Scott et al., 2013); see Section 7 on related work. We recreate these results here, with the aim
to introduce required notation, to highlight several important results for later algorithm development
and to include a few missing results needed for our approach. Though the proof techniques are
themselves quite different and could be of interest, we include them in the appendix due to space.
There are typically two aspects to address with identifiability. First, one needs to determine if a
problem is identifiable, and, second, if it is not, propose a canonical form that is identifiable. In
this section we will see that class prior is not identifiable in general because f0 can be a mixture
containing f1 and vice versa. To ensure identifiability, it is necessary to choose a canonical form
that prefers a class prior that makes the two components as different as possible; this canonical form
was introduced as the mutual irreducibility condition (Scott et al., 2013) and is related to the proper
novelty distribution (Blanchard et al., 2010) and the max-canonical form (Jain et al., 2016).
We discuss identifiability in terms of measures. Let ?, ?, ?0 and ?1 be probability measures defined
on some ?-algebra A on X , corresponding to f , g, f0 and f1 , respectively. It follows that
(3)
(4)
? = ??1 + (1 ? ?)?0
? = ??1 + (1 ? ?)?0 .
Consider a family of pairs of mixtures having the same components
F(?) = {(?, ?) : ? = ??1 + (1 ? ?)?0 , ? = ??1 + (1 ? ?)?0 , (?0 , ?1 ) 2 ?, 0 ? ? < ? ? 1},
where ? is some set of pairs of probability measures defined on A. The family is parametrized
by the quadruple (?, ?, ?0 , ?1 ). The condition ? > ? means that ? has a greater proportion of
?1 compared to ?. This is consistent with our assumption that the labeled sample mainly contains
positives. The most general choice for ? is
?
?all = P all ? P all \ (?, ?) : ? 2 P all ,
?
where P all is the set of all probability measures defined on A and (?, ?) : ? 2 P all is the set of
pairs with equal distributions. Removing equal pairs prevents ? and ? from being identical.
We now define the maximum proportion of a component ?1 in a mixture ?, which is used in the
results below and to specify the criterion that enables identifiability; more specifically,
?
a??1 = max ? 2 [0, 1] : ? = ??1 + (1 ? ?)?0 , ?0 2 P all .
(5)
Of particular interest is the case when a??1 = 0, which should be read as ?? is not a mixture containing ?1 ?. We finally define the set all possible (?, ?) that generate ? and ? when (?0 , ?1 ) varies in
?:
A+ (?, ?, ?) = {(?, ?) : ? = ??1 + (1 ? ?)?0 , ? = ??1 + (1 ? ?)?0 , (?0 , ?1 ) 2 ?, 0 ? ? < ? ? 1}.
If A+ (?, ?, ?) is a singleton set for all (?, ?) 2 F(?), then F(?) is identifiable in (?, ?).
First, we show that the most general choice for ?, ?all , leads to unidentifiability (Lemma 1). Fortunately, however, by choosing a restricted set
?
?res = (?0 , ?1 ) 2 ?all : a??10 = 0, a??01 = 0
as ?, we do obtain identifiability (Theorem 1). In words, ?res contains pairs of distributions, where
each distribution in a pair cannot be expressed as a mixture containing the other. The proofs of the
results below are in the Appendix.
Lemma 1 (Unidentifiability) Given a pair of mixtures (?, ?) 2
(?, ?, ?0 , ?1 ) generate (?, ?) and ?+ = a?? , ? + = a?? . It follows that
F(?all ), let parameters
1. There is a one-to-one relation between (?0 , ?1 ) and (?, ?) and
?0 =
?? ? ??
,
???
?1 =
3
(1 ? ?)? ? (1 ? ?)?
.
???
(6)
2. Both expressions on the right-hand side of Equation 6 are well defined probability measures
if and only if ?/? ? ?+ and (1??)/(1??) ? ? + .
3. A+ (?, ?, ?all ) = {(?, ?) : ?/? ? ?+ , (1??)/(1??) ? ? + }.
4. F(?all ) is unidentifiable in (?, ?); i.e., (?, ?) is not uniquely determined from (?, ?).
5. F(?all ) is unidentifiable in ? and ?, individually; i.e., neither ? nor ? is uniquely determined from (?, ?).
Observe that the definition of a??1 and ? 6= ? imply ?+ < 1 and, consequently, any (?, ?) 2
A+ (?, ?, ?all ) satisfies ? < ?, as expected.
Theorem 1 (Identifiablity) Given (?, ?) 2 F(?all ), let ?+ = a?? and ? + = a?? . Let ??0 =
(???+ ? )/(1??+ ), ??1 = (??? + ?)/(1?? + ) and
It follows that
?? = ?
+
? ? = (1?? )/(1??+ ? + ).
(1?? + )/(1??+ ? + ),
+
(7)
1. (?? , ? ? , ??0 , ??1 ) generate (?, ?)
??
??
2. (??0 , ??1 ) 2 ?res and, consequently, ?? = a?1 , ? ? = a? 1 .
3. F(?res ) contains all pairs of mixtures in F(?all ).
4. A+ (?, ?, ?res ) = {(?? , ? ? )}.
5. F(?res ) is identifiable in (?, ?); i.e., (?, ?) is uniquely determined from (?, ?).
We refer to the expressions of ? and ? as mixtures of components ?0 and ?1 as a max-canonical form
when (?0 , ?1 ) is picked from ?res . This form enforces that ?1 is not a mixture containing ?0 and vice
versa, which leads to ?0 and ?1 having maximum separation, while still generating ? and ?. Each
pair of distributions in F(?res ) is represented in this form. Identifiability of F(?res ) in (?, ?) occurs
precisely when A+ (?, ?, ?res ) = {(?? , ? ? )}, i.e., (?? , ? ? ) is the only pair of mixing proportions
that can appear in a max-canonical form of ? and ?. Moreover, Statement 1 in Theorem 1 and
Statement 1 in Lemma 1 imply that the max-canonical form is unique and completely specified
by (?? , ? ? , ??0 , ??1 ), with ?? < ? ? following from Equation 7. Thus, using F(?res ) to model the
unlabeled and labeled data distributions makes estimation of not only ?, the class prior, but also
?, ?0 , ?1 a well-posed problem. Moreover, due to Statement 3 in Theorem 1, there is no loss in the
modeling capability by using F(?res ) instead of F(?all ). Overall, identifiability, absence of loss of
modeling capability and maximum separation between ?0 and ?1 combine to justify estimating ??
as the class prior.
4
Univariate Transformation
The theory and algorithms for class prior estimation are agnostic to the dimensionality of the data;
in practice, however, this dimensionality can have important consequences. Parametric Gaussian
mixture models trained via expectation-maximization (EM) are known to strongly suffer from colinearity in high-dimensional data. Nonparametric (kernel) density estimation is also known to have
curse-of-dimensionality issues, both in theory (Liu et al., 2007) and in practice (Scott, 2008).
We address the curse of dimensionality by transforming the data to a single dimension. The transformation ? : X ! R, surprisingly, is simply an output of a non-traditional classifier trained to
separate labeled sample, L, from unlabeled sample, U . The transform is similar to that in (Jain
et al., 2016), except that it is not required to be calibrated like a posterior distribution; as shown
below, a good ranking function is sufficient. First, however, we introduce notation and formalize the
data generation steps (Figure 1).
Let X be a random variable taking values in X, capturing the true distribution of inputs, μ, and Y be an unobserved random variable taking values in Y, giving the true class of the inputs. It follows that X|Y = 0 and X|Y = 1 are distributed according to λ0 and λ1, respectively. Let S be a selection random variable, whose value in S = {0, 1, 2} determines the sample to which an input x is added (Figure 1). When S = 1, x is added to the noisy labeled sample; when S = 0, x is added to the unlabeled sample; and when S = 2, x is not added to either of the samples.
Figure 1: The labeling procedure, with S taking values from S = {0, 1, 2}. In the first step, the sample is randomly selected to attempt labeling, with some probability independent of X or Y. If it is not selected, it is added to the "Unlabeled" set. If it is selected, then labeling is attempted. If the true label is Y = 1, then with probability θ1 ∈ (0, 1) the labeling will succeed and the sample is added to "Noisy positives". Otherwise, it is added to the "Dropped" set. If the true label is Y = 0, then the attempted labeling is much more likely to fail, but because of noise, could succeed. The attempted label of Y = 0 succeeds with probability θ0, and the sample is added to "Noisy positives", even though it is actually a negative instance. θ0 = 0 leads to the no-noise case and the noise increases as θ0 increases. β = θ1α/(θ1α + θ0(1 − α)) gives the proportion of positives in the "Noisy positives".
It follows that X^u = X|S = 0 and X^l = X|S = 1 are distributed according to μ and ν, respectively. We make the following assumptions, which are consistent with the statements above:

p(y|S = 0) = p(y),   (8)
p(y = 1|S = 1) = β,   (9)
p(x|s, y) = p(x|y).   (10)
Assumptions 8 and 9 state that the proportions of positives in the unlabeled sample and the labeled sample match the proportions of positives in μ and ν (that is, α and β), respectively. Assumption 10 states that the distribution of the positive inputs (and the negative inputs) in both the unlabeled and the labeled samples is equal and unbiased. Lemma 2 gives the implications of these assumptions. Statement 3 in Lemma 2 is particularly interesting and perhaps counter-intuitive, as it states that with non-zero probability some inputs need to be dropped.
Lemma 2 Let X, Y and S be random variables taking values in X, Y and S, respectively, and X^u = X|S = 0 and X^l = X|S = 1. For measures μ, ν, λ0, λ1 satisfying Equations 3 and 4 and λ1 ≠ λ0, let μ, λ0, λ1 give the distribution of X, X|Y = 0 and X|Y = 1, respectively. If X, Y and S satisfy Assumptions 8, 9 and 10, then
1. X is independent of the event S = 0; i.e., p(x|S = 0) = p(x).
2. X^u and X^l are distributed according to μ and ν, respectively.
3. p(S = 2) ≠ 0.
The proof is in the Appendix. Next, we highlight the conditions under which the score function τ preserves α*. Observing that S serves as the pseudo class label for labeled vs. unlabeled classification as well, we first give an expression for the posterior:

τ_p(x) = p(S = 1 | x, S ∈ {0, 1}),  ∀x ∈ X.   (11)
Theorem 2 (α*-preserving transform) Let random variables X, Y, S, X^u, X^l and measures μ, ν, λ0, λ1 be as defined in Lemma 2. Let τ_p be the posterior as defined in Equation 11 and τ = H ∘ τ_p, where H is a 1-to-1 function on [0, 1] and ∘ is the composition operator. Assume
1. (λ0, λ1) ∈ Λ_res,
2. X^u and X^l are continuous with densities f and g, respectively,
3. μ_τ, ν_τ, λ_τ,1 are the measures corresponding to τ(X^u), τ(X^l), τ(X1), respectively,
4. (α⁺, β⁺, α*, β*) = (a_μ(ν), a_ν(μ), a_μ(λ1), a_ν(λ1)) and (α̃⁺, β̃⁺, α̃*, β̃*) = (a_{μ_τ}(ν_τ), a_{ν_τ}(μ_τ), a_{μ_τ}(λ_τ,1), a_{ν_τ}(λ_τ,1)).
Then

(α̃⁺, β̃⁺, α̃*, β̃*) = (α⁺, β⁺, α*, β*)

and so τ is an α*-preserving transformation.
Moreover, τ_p can also be used to compute the true posterior probability:

p(Y = 1|x) = [α*(1 − α*)/(β* − α*)] · ( [p(S = 0)/p(S = 1)] · [τ_p(x)/(1 − τ_p(x))] − (1 − β*)/(1 − α*) ).   (12)
The proof is in the Appendix. Theorem 2 shows that α* is the same for the original data and the transformed data, if the transformation function τ can be expressed as a composition of τ_p and a one-to-one function, H, defined on [0, 1]. Trivially, τ_p itself is one such function. We emphasize, however, that α*-preservation is not limited by the efficacy of the calibration algorithm; uncalibrated scoring that ranks inputs in the same order as τ_p(x) also preserves α*. Theorem 2 further demonstrates how the true posterior, p(Y = 1|x), can be recovered from τ_p by plugging in estimates of τ_p, p(S=0)/p(S=1), α* and β* in Equation 12. The posterior probability τ_p can be estimated directly by using a probabilistic classifier or by calibrating a classifier's score (Platt, 1999; Niculescu-Mizil and Caruana, 2005); |U|/|L| serves as an estimate of p(S=0)/p(S=1); Section 5 gives parametric and nonparametric approaches for estimation of α* and β*.
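The plug-in use of Equation 12 can be sketched as follows (our illustration; the helper name and the clipping safeguard are assumptions, not part of the paper). Here tau_p comes from any probabilistic classifier separating L from U, and n_u/n_l estimates p(S = 0)/p(S = 1).

```python
import numpy as np

# Sketch of the plug-in posterior from Equation 12. alpha_star and beta_star
# are the estimated class prior and labeled-sample positive proportion.
def true_posterior(tau_p, alpha_star, beta_star, n_u, n_l):
    tau_p = np.clip(tau_p, 1e-12, 1 - 1e-12)      # avoid division by zero
    odds = (n_u / n_l) * tau_p / (1.0 - tau_p)    # (p(S=0)/p(S=1)) * tau/(1-tau)
    scale = alpha_star * (1.0 - alpha_star) / (beta_star - alpha_star)
    post = scale * (odds - (1.0 - beta_star) / (1.0 - alpha_star))
    return np.clip(post, 0.0, 1.0)                # clip residual estimation error
```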
5 Algorithms
In this section, we derive a parametric and a nonparametric algorithm to estimate α* and β* from the unlabeled sample, U = {X_i^u}, and the noisy positive sample, L = {X_i^l}. In theory, both approaches can handle multivariate samples; in practice, however, to circumvent the curse of dimensionality, we exploit the theory of α*-preserving univariate transforms to transform the samples.
Parametric approach. The parametric approach is derived by modeling each sample as a two-component Gaussian mixture, sharing the same components but having different mixing proportions:

X_i^u ~ α N(u1, Σ1) + (1 − α) N(u0, Σ0)
X_i^l ~ β N(u1, Σ1) + (1 − β) N(u0, Σ0)

where u1, u0 ∈ R^d and Σ1, Σ0 ∈ S_d^{++}, the set of all d×d positive definite matrices. The algorithm is an extension of the EM approach for Gaussian mixture models (GMMs) where, instead of estimating the parameters of a single mixture, the parameters of both mixtures (α, β, u0, u1, Σ0, Σ1) are estimated simultaneously by maximizing the combined likelihood over both U and L. This approach, which we refer to as a multi-sample GMM (MSGMM), exploits the constraint that the two mixtures share the same components. The update rules and their derivation are given in the Appendix.
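The exact update rules are derived in the Appendix; the sketch below is our own univariate illustration of the natural EM updates for two samples sharing Gaussian components, intended only to make the structure of MSGMM concrete (initialization and update details here are assumptions).

```python
import numpy as np
from scipy.stats import norm

# Univariate sketch of multi-sample GMM EM (MSGMM): the unlabeled sample xu
# and labeled sample xl share two Gaussian components but have their own
# mixing proportions alpha and beta.
def msgmm_em(xu, xl, n_iter=200):
    alpha, beta = 0.3, 0.7
    u0, u1 = np.percentile(np.r_[xu, xl], [25, 75])  # crude initialization
    s0 = s1 = np.std(np.r_[xu, xl])
    for _ in range(n_iter):
        # E-step: responsibilities of component 1 in each sample
        ru = alpha * norm.pdf(xu, u1, s1)
        ru = ru / (ru + (1 - alpha) * norm.pdf(xu, u0, s0))
        rl = beta * norm.pdf(xl, u1, s1)
        rl = rl / (rl + (1 - beta) * norm.pdf(xl, u0, s0))
        # M-step: proportions are per-sample, component moments are pooled
        alpha, beta = ru.mean(), rl.mean()
        w1, w0 = np.r_[ru, rl], np.r_[1 - ru, 1 - rl]
        x = np.r_[xu, xl]
        u1, u0 = np.average(x, weights=w1), np.average(x, weights=w0)
        s1 = np.sqrt(np.average((x - u1) ** 2, weights=w1))
        s0 = np.sqrt(np.average((x - u0) ** 2, weights=w0))
    return alpha, beta
```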
Nonparametric approach. Our nonparametric strategy directly exploits the results of Lemma 1 and Theorem 1, which give a direct connection between (α⁺ = a_μ(ν), β⁺ = a_ν(μ)) and (α*, β*). Therefore, for a two-component mixture sample, M, and a sample from one of the components, C, it only requires an algorithm to estimate the maximum proportion of C in M. For this purpose, we use the AlphaMax algorithm (Jain et al., 2016), briefly summarized in the Appendix. Specifically, our two-step approach for estimating α* and β* is as follows: (i) estimate α⁺ and β⁺ as outputs of AlphaMax(U, L) and AlphaMax(L, U), respectively; (ii) estimate (α*, β*) from the estimates of (α⁺, β⁺) by applying Equation 7. We refer to our nonparametric algorithm as AlphaMax-N.
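Given any mixture-proportion estimator, AlphaMax-N is a thin wrapper around Equation 7. In the sketch below (ours), alphamax is a placeholder for the AlphaMax estimator of Jain et al. (2016), treated as a black box, and canonical_proportions is the Equation 7 helper sketched earlier.

```python
# Sketch of AlphaMax-N. alphamax(mixture, component) stands for any estimator
# of the maximum proportion of `component` inside `mixture`.
def alphamax_n(U, L, alphamax):
    alpha_plus = alphamax(U, L)   # max proportion of nu in mu
    beta_plus = alphamax(L, U)    # max proportion of mu in nu
    return canonical_proportions(alpha_plus, beta_plus)  # Equation 7
```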
6 Empirical investigation
In this section we systematically evaluate the new algorithms in a controlled, synthetic setting as well as on a variety of data sets from the UCI Machine Learning Repository (Lichman, 2013).
Experiments on synthetic data: We start by evaluating all algorithms in a univariate setting where both mixing proportions, α and β, are known. We generate unit-variance Gaussian and unit-scale Laplace-distributed i.i.d. samples and explore the impact of the mixing proportions, the size of the component sample, and the separation and overlap between the mixing components on the accuracy of estimation. The class prior α was varied over {0.05, 0.25, 0.50} and the labeled-sample positive proportion β over {1.00, 0.95, 0.75}. The size of the labeled sample L was varied over {100, 1000}, whereas the size of the unlabeled sample U was fixed at 10000.
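For concreteness, the Gaussian variant of this synthetic setup can be generated as follows (a sketch; the component separation used here is illustrative, and the Laplace case swaps the sampler).

```python
import numpy as np

rng = np.random.default_rng(0)

def two_component_sample(n, prop, sampler1, sampler0):
    """Draw n points from prop * component1 + (1 - prop) * component0."""
    z = rng.random(n) < prop
    return np.where(z, sampler1(n), sampler0(n))

# Unit-variance Gaussian components; alpha and beta as in the text.
comp1 = lambda n: rng.normal(2.0, 1.0, n)   # positive component (separation assumed)
comp0 = lambda n: rng.normal(0.0, 1.0, n)   # negative component
U = two_component_sample(10000, 0.25, comp1, comp0)  # unlabeled, alpha = 0.25
L = two_component_sample(1000, 0.95, comp1, comp0)   # noisy positives, beta = 0.95
```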
Experiments on real-life data: We considered twelve real-life data sets from the UCI Machine Learning Repository. To adapt these data to our problems, categorical features were transformed into numerical ones using a sparse binary representation, the regression data sets were transformed into classification problems by thresholding at the mean of the target variable, and the multi-class classification problems were converted into binary problems by combining classes. In each data set, a subset of positive and negative examples was randomly selected to provide a labeled sample, while the remaining data (without class labels) were used as unlabeled data. The size of the labeled sample was kept at 1000 (or 100 for small data sets) and the maximum size of the unlabeled data was set to 10000.
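The construction of the two samples from a fully labeled data set can be sketched as follows (our illustration of the protocol; function and variable names are hypothetical): L is built with a prescribed positive proportion β and is treated as all positive, while the remaining points form U with their labels hidden.

```python
import numpy as np

# Build a labeled sample L with positive proportion beta from a labeled pool,
# leaving the rest (labels hidden) as the unlabeled sample U.
def make_pu_split(X, y, n_labeled, beta, rng):
    pos, neg = np.flatnonzero(y == 1), np.flatnonzero(y == 0)
    n_pos = int(round(beta * n_labeled))
    lab = np.r_[rng.choice(pos, n_pos, replace=False),
                rng.choice(neg, n_labeled - n_pos, replace=False)]
    rest = np.setdiff1d(np.arange(len(y)), lab)
    return X[lab], X[rest]   # L (treated as all positive), U
```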
Algorithms: We compare the AlphaMax-N and MSGMM algorithms to the Elkan-Noto algorithm (Elkan and Noto, 2008) as well as the noiseless version of AlphaMax (Jain et al., 2016). There are several versions of the Elkan-Noto estimator and each can use any underlying classifier. We used the e1 alternative estimator combined with ensembles of 100 two-layer feed-forward neural networks, each with five hidden units. The out-of-bag scores of the same classifier were used as a class-prior-preserving transformation that created an input to the AlphaMax algorithms. It is important to mention that neither the Elkan-Noto nor the AlphaMax algorithm was developed to handle noisy labeled data. In addition, the theory behind the Elkan-Noto estimator restricts its use to class-conditional distributions with non-overlapping supports. The algorithm by du Plessis and Sugiyama (2014) minimizes the same objective as the e1 Elkan-Noto estimator and, thus, was not implemented.
Evaluation: All experiments were repeated 50 times to be able to draw conclusions with statistical significance. In real-life data, the labeled sample was created randomly by choosing an appropriate number of positive and negative examples to satisfy the conditions on β and the size of the labeled sample, while the remaining data was used as the unlabeled sample. Therefore, the class prior in the unlabeled data varies with the selection of the noise parameter β. The mean absolute difference between the true and estimated class priors was used as a performance measure. The best performing algorithm on each data set was determined by multiple hypothesis testing using a P-value of 0.05 and Bonferroni correction.
Results: The comprehensive results for synthetic data drawn from univariate Gaussian and Laplace distributions are shown in the Appendix (Table 2). In these experiments no transformation was applied prior to running any of the algorithms. As expected, the results show excellent performance of the MSGMM model on the Gaussian data. These results significantly degrade on Laplace-distributed data, suggesting sensitivity to the underlying assumptions. On the other hand, AlphaMax-N was accurate over all data sets and also robust to noise. These results suggest that the new parametric and nonparametric algorithms perform well in these controlled settings.
Table 1 shows the results on twelve real data sets. Here, the AlphaMax and AlphaMax-N algorithms demonstrate significant robustness to noise, although the parametric version MSGMM was competitive in some cases. On the other hand, the Elkan-Noto algorithm expectedly degrades with noise. Finally, we investigated the practical usefulness of the α*-preserving transform. Table 3 (Appendix) shows the results of AlphaMax-N and MSGMM on the real data sets, with and without using the transform. Because of computational and numerical issues, we reduced the dimensionality by using principal component analysis (the original data caused matrix singularity issues for MSGMM and density estimation issues for AlphaMax-N). MSGMM deteriorates significantly without the transform, whereas AlphaMax-N preserves some signal for the class prior. AlphaMax-N with the transform, however, shows superior performance on most data sets.
7 Related work
Class prior estimation in a semi-supervised setting, including positive-unlabeled learning, has been extensively discussed previously; see Saerens et al. (2002); Cortes et al. (2008); Elkan and Noto (2008); Blanchard et al. (2010); Scott et al. (2013); Jain et al. (2016) and references therein. Recently, a general setting for label noise has also been introduced, called the mutual contamination model. The aim under this model is to estimate multiple unknown base distributions, using multiple random samples that are composed of different convex combinations of those base distributions (Katz-Samuels and Scott, 2016). The setting of asymmetric label noise is a subset of this more general setting, treated under general conditions by Scott et al. (2013), and previously investigated under a more restrictive setting as co-training (Blum and Mitchell, 1998). A natural approach is to use robust estimation to learn in the presence of class noise; this strategy, however, has been shown to be ineffective, both theoretically (Long and Servedio, 2010; Manwani and Sastry, 2013) and empirically (Hawkins and McLachlan, 1997; Bashir and Carter, 2005), indicating the need to explicitly model the noise. Generative mixture model approaches have also been developed, which explicitly model the noise (Lawrence and Scholkopf, 2001; Bouveyron and Girard, 2009); these algorithms, however, assume labeled data for each class. As the most related work, though Scott et al. (2013) did not explicitly treat positive-unlabeled learning with noisy positives, their formulation can incorporate this setting by using π0 = α and β = 1 − π1. The theoretical and algorithmic treatment, however, is very different. Their focus is on identifiability and analyzing convergence rates and statistical properties, assuming access to a function which can obtain mixture proportions between samples. They do not explicitly address issues with high-dimensional data, nor do they focus on algorithms to obtain α*. In contrast, we focus primarily on the univariate transformation to handle high-dimensional data and on practical algorithms for estimating α*. Supervised learning used for a class-prior-preserving transformation provides a rich set of techniques to address high-dimensional data.
Table 1: Mean absolute difference between estimated and true mixing proportion over twelve data sets from the UCI Machine Learning Repository. Statistical significance was evaluated by comparing the Elkan-Noto algorithm, AlphaMax, AlphaMax-N, and the multi-sample GMM after applying a multivariate-to-univariate transform (MSGMM-T); the asterisk indicates the statistically significant winner. For each data set, shown are the true mixing proportion (α), true proportion of positives in the labeled sample (β), sample dimensionality (d), the number of positive examples (n1), the total number of examples (n), and the area under the ROC curve (AUC) for a model trained between labeled and unlabeled data.

Data       α      β     AUC    d    n1    n      Elkan-Noto  AlphaMax  AlphaMax-N  MSGMM-T
Bank       0.095  1.00  0.842  13   5188  45000  0.241       0.070     0.037*      0.163
           0.096  0.95  0.819  13   5188  45000  0.284       0.079     0.036*      0.155
           0.101  0.75  0.744  13   5188  45000  0.443       0.124     0.040*      0.127
Concrete   0.419  1.00  0.685  8    490   1030   0.329       0.141     0.181       0.077*
           0.425  0.95  0.662  8    490   1030   0.363       0.174     0.231       0.095*
           0.446  0.75  0.567  8    490   1030   0.531       0.212     0.272       0.233
Gas        0.342  1.00  0.825  127  2565  5574   0.017       0.011     0.017       0.008*
           0.353  0.95  0.795  127  2565  5574   0.078       0.016     0.006       0.006
           0.397  0.75  0.672  127  2565  5574   0.396       0.137     0.009       0.006*
Housing    0.268  1.00  0.810  13   209   506    0.159       0.087     0.094       0.209
           0.281  0.95  0.777  13   209   506    0.226       0.094     0.110       0.204
           0.330  0.75  0.651  13   209   506    0.501       0.125     0.134       0.172
Landsat    0.093  1.00  0.933  36   1508  6435   0.074       0.009     0.007*      0.157
           0.103  0.95  0.904  36   1508  6435   0.110       0.015     0.008*      0.152
           0.139  0.75  0.788  36   1508  6435   0.302       0.063     0.012*      0.143
Mushroom   0.409  1.00  0.792  126  3916  8124   0.029       0.015*    0.022       0.037
           0.416  0.95  0.766  126  3916  8124   0.087       0.015     0.008*      0.037
           0.444  0.75  0.648  126  3916  8124   0.370       0.140     0.020       0.024
Pageblock  0.086  1.00  0.885  10   560   5473   0.116       0.026*    0.044       0.129
           0.087  0.95  0.858  10   560   5473   0.137       0.031*    0.052       0.125
           0.090  0.75  0.768  10   560   5473   0.256       0.041*    0.064       0.111
Pendigit   0.243  1.00  0.875  16   3430  10992  0.030       0.006*    0.009       0.081
           0.248  0.95  0.847  16   3430  10992  0.071       0.011     0.005*      0.074
           0.268  0.75  0.738  16   3430  10992  0.281       0.093     0.007*      0.062
Pima       0.251  1.00  0.735  8    268   768    0.351       0.120     0.111       0.171
           0.259  0.95  0.710  8    268   768    0.408       0.118     0.110       0.168
           0.289  0.75  0.623  8    268   768    0.586       0.144     0.156       0.175
Shuttle    0.139  1.00  0.929  9    8903  58000  0.024*      0.027     0.029       0.157
           0.140  0.95  0.903  9    8903  58000  0.052       0.004*    0.007       0.157
           0.143  0.75  0.802  9    8903  58000  0.199       0.047     0.004*      0.148
Spambase   0.226  1.00  0.842  57   1813  4601   0.184       0.046     0.041       0.059
           0.240  0.95  0.812  57   1813  4601   0.246       0.059     0.042*      0.063
           0.295  0.75  0.695  57   1813  4601   0.515       0.155     0.044*      0.059
Wine       0.566  1.00  0.626  11   4113  6497   0.290       0.083     0.060       0.070
           0.575  0.95  0.610  11   4113  6497   0.322       0.113     0.063       0.076
           0.612  0.75  0.531  11   4113  6497   0.420       0.322     0.353       0.293
8 Conclusion
In this paper, we developed a practical algorithm for classification of positive-unlabeled data with
noise in the labeled data set. In particular, we focused on a strategy for high-dimensional data,
providing a univariate transform that reduces the dimension of the data, preserves the class prior so
that estimation in this reduced space remains valid and is then further useful for classification. This
approach provides a simple algorithm that simultaneously improves estimation of the class prior and
provides a resulting classifier. We derived a parametric and a nonparametric version of the algorithm
and then evaluated its performance on a wide variety of learning scenarios and data sets. To the best
of our knowledge, this algorithm represents one of the first practical and easy-to-use approaches to
learning with high-dimensional positive-unlabeled data with noise in the labels.
Acknowledgements
We thank Prof. Michael W. Trosset for helpful comments. Grant support: NSF DBI-1458477, NIH
R01MH105524, NIH R01GM103725, and the Indiana University Precision Health Initiative.
References
S. Bashir and E. M. Carter. High breakdown mixture discriminant analysis. J Multivar Anal, 93(1):102–111, 2005.
G. Blanchard, G. Lee, and C. Scott. Semi-supervised novelty detection. J Mach Learn Res, 11:2973–3009, 2010.
A. Blum and T. Mitchell. Combining labeled and unlabeled data with co-training. COLT 1998, pages 92–100, 1998.
C. Bouveyron and S. Girard. Robust supervised classification with mixture models: learning from data with uncertain labels. Pattern Recognit, 42(11):2649–2658, 2009.
C. Cortes, M. Mohri, M. Riley, and A. Rostamizadeh. Sample selection bias correction theory. ALT 2008, pages 38–53, 2008.
F. Denis, R. Gilleron, and F. Letouzey. Learning from positive and unlabeled examples. Theor Comput Sci, 348(16):70–83, 2005.
M. C. du Plessis and M. Sugiyama. Class prior estimation from positive and unlabeled data. IEICE Trans Inf & Syst, E97-D(5):1358–1362, 2014.
C. Elkan and K. Noto. Learning classifiers from only positive and unlabeled data. KDD 2008, pages 213–220, 2008.
D. M. Hawkins and G. J. McLachlan. High-breakdown linear discriminant analysis. J Am Stat Assoc, 92(437):136–143, 1997.
S. Jain, M. White, M. W. Trosset, and P. Radivojac. Nonparametric semi-supervised learning of class proportions. arXiv preprint arXiv:1601.01944, 2016. URL http://arxiv.org/abs/1601.01944.
J. Katz-Samuels and C. Scott. A mutual contamination analysis of mixed membership and partial label models. arXiv preprint arXiv:1602.06235, 2016. URL http://arxiv.org/abs/1602.06235.
N. D. Lawrence and B. Scholkopf. Estimating a kernel Fisher discriminant in the presence of label noise. ICML 2001, pages 306–313, 2001.
M. Lichman. UCI Machine Learning Repository, 2013. URL http://archive.ics.uci.edu/ml.
H. Liu, J. D. Lafferty, and L. A. Wasserman. Sparse nonparametric density estimation in high dimensions using the rodeo. AISTATS 2007, pages 283–290, 2007.
P. M. Long and R. A. Servedio. Random classification noise defeats all convex potential boosters. Mach Learn, 78(3):287–304, 2010.
N. Manwani and P. S. Sastry. Noise tolerance under risk minimization. IEEE T Cybern, 43(3):1146–1151, 2013.
A. K. Menon, B. van Rooyen, C. S. Ong, and R. C. Williamson. Learning from corrupted binary labels via class-probability estimation. ICML 2015, pages 125–134, 2015.
A. Niculescu-Mizil and R. Caruana. Obtaining calibrated probabilities from boosting. UAI 2005, pages 413–420, 2005.
J. C. Platt. Probabilistic outputs for support vector machines and comparison to regularized likelihood methods, pages 61–74. MIT Press, 1999.
H. G. Ramaswamy, C. Scott, and A. Tewari. Mixture proportion estimation via kernel embedding of distributions. arXiv preprint arXiv:1603.02501, 2016. URL https://arxiv.org/abs/1603.02501.
M. D. Reid and R. C. Williamson. Composite binary losses. J Mach Learn Res, 11:2387–2422, 2010.
M. Saerens, P. Latinne, and C. Decaestecker. Adjusting the outputs of a classifier to new a priori probabilities: a simple procedure. Neural Comput, 14:21–41, 2002.
T. Sanderson and C. Scott. Class proportion estimation with application to multiclass anomaly rejection. AISTATS 2014, pages 850–858, 2014.
C. Scott, G. Blanchard, and G. Handy. Classification with asymmetric label noise: consistency and maximal denoising. J Mach Learn Res W&CP, 30:489–511, 2013.
D. W. Scott. The curse of dimensionality and dimension reduction. Multivariate Density Estimation: Theory, Practice, and Visualization, pages 195–217, 2008.
H. Steen and M. Mann. The ABC's (and XYZ's) of peptide sequencing. Nat Rev Mol Cell Biol, 5(9):699–711, 2004.
G. Ward, T. Hastie, S. Barry, J. Elith, and J. R. Leathwick. Presence-only data and the EM algorithm. Biometrics, 65(2):554–563, 2009.
Approximate maximum entropy principles via
Goemans-Williamson with applications to provable
variational methods
Yuanzhi Li
Department of Computer Science
Princeton University
Princeton, NJ, 08450
[email protected]
Andrej Risteski
Department of Computer Science
Princeton University
Princeton, NJ, 08450
[email protected]
Abstract
The well known maximum-entropy principle due to Jaynes, which states that
given mean parameters, the maximum entropy distribution matching them is in an
exponential family has been very popular in machine learning due to its "Occam's razor" interpretation. Unfortunately, calculating the potentials in the maximum-entropy distribution is intractable [BGS14]. We provide computationally efficient
versions of this principle when the mean parameters are pairwise moments: we
design distributions that approximately match given pairwise moments, while
having entropy which is comparable to the maximum entropy distribution matching
those moments.
We additionally provide surprising applications of the approximate maximum
entropy principle to designing provable variational methods for partition function
calculations for Ising models without any assumptions on the potentials of the
model. More precisely, we show that we can get approximation guarantees for the
log-partition function comparable to those in the low-temperature limit, which is
the setting of optimization of quadratic forms over the hypercube. ([AN06])
1 Introduction
Maximum entropy principle The maximum entropy principle [Jay57] states that given mean parameters, i.e. E_μ[φ_t(x)] for a family of functionals φ_t(x), t ∈ [1, T], where μ is a distribution over the hypercube {−1, 1}^n, the entropy-maximizing distribution μ is an exponential family distribution, i.e. μ(x) ∝ exp(Σ_{t=1}^T J_t φ_t(x)) for some potentials J_t, t ∈ [1, T].¹ This principle has been one of the reasons for the popularity of graphical models in machine learning: the "maximum entropy" assumption is interpreted as "minimal assumptions" on the distribution other than what is known about it.
However, this principle is problematic from a computational point of view. Due to results of [BGS14, SV14], the potentials J_t of the Ising model, in many cases, are impossible to estimate well in polynomial time, unless NP = RP, so merely getting the description of the maximum entropy distribution is already hard. Moreover, in order to extract useful information about this distribution, usually we would also like to at least be able to sample efficiently from this distribution, which is typically NP-hard or even #P-hard.
¹ There is a more general way to state this principle over an arbitrary domain, not just the hypercube, but for clarity in this paper we will focus on the hypercube only.
In this paper we address this problem in certain cases. We provide a "bi-criteria" approximation for the special case where the functionals φ_t(x) are φ_{i,j}(x) = x_i x_j, i.e. pairwise moments: we produce an efficiently sampleable distribution over the hypercube which matches these moments up to multiplicative constant factors, and has entropy at most a constant factor smaller than the entropy of the maximum entropy distribution.²
Furthermore, the distribution which achieves this is very natural: the sign of a multivariate normal variable. This provides a theoretical explanation for the phenomenon observed by the computational neuroscience community [BB07] that this distribution (there named the dichotomized Gaussian) has near-maximum entropy.
² In fact, we produce a distribution with entropy Ω(n), which implies the latter claim since the maximum entropy of any distribution over {−1, 1}^n is at most n.
Variational methods The above results also allow us to get results for a seemingly unrelated problem: approximating the partition function Z = Σ_{x∈{−1,1}^n} exp(Σ_{t=1}^T J_t φ_t(x)) of a member of an exponential family. The reason this task is important is that it is tied to calculating marginals.
One of the ways this task is solved is variational methods: namely, expressing log Z as an optimization problem. While there is a plethora of work on variational methods, of many flavors (mean field, Bethe/Kikuchi relaxations, TRBP, etc.; for a survey, see [WJ08]), they typically come either with no guarantees, or with guarantees in very constrained cases (e.g. loopless graphs; graphs with large girth, etc. [WJW03, WJW05]). While this is a rich area of research, the following extremely basic research question has not been answered:
What is the best approximation guarantee on the partition function in the worst case (with no additional assumptions on the potentials)?
In the low-temperature limit, i.e. when |J_t| → ∞, log Z → max_{x∈{−1,1}^n} Σ_{t=1}^T J_t φ_t(x), i.e. the question reduces to pure optimization. In this regime, this question has very satisfying answers for many families φ_t(x). One classical example is when the functionals are φ_{i,j}(x) = x_i x_j. In the graphical model community, these are known as Ising models, and in the optimization community this is the problem of optimizing quadratic forms, which has been studied by [CW04, AN06, AMMN06]. In the optimization version, the previous papers showed that in the worst case, one can get an O(log n) multiplicative factor approximation of it, and that unless P = NP, one cannot get better than constant factor approximations of it.
In the finite-temperature version, it is known that it is NP-hard to achieve a (1 + ε) factor approximation to the partition function (i.e. construct an FPRAS) [SS12], but nothing is known about coarser approximations. We prove in this paper, informally, that one can get comparable multiplicative guarantees on the log-partition function in the finite temperature case as well, using the tools and insights we develop on the maximum entropy principles.
Our methods are extremely generic, and likely to apply to many other exponential families, where algorithms based on linear/semidefinite programming relaxations are known to give good guarantees in the optimization regime.
2 Statements of results and prior work
Approximate maximum entropy The main theorem in this section is the following one.
Theorem 2.1. For any covariance matrix Σ of a centered distribution μ : {−1, 1}^n → R, i.e. E_μ[x_i x_j] = Σ_{i,j}, E_μ[x_i] = 0, there is an efficiently sampleable distribution μ̃, which can be sampled as sign(g), where g ~ N(0, Σ + εI), and which satisfies (G/(1+ε))·Σ_{i,j} ≤ E_μ̃[X_i X_j] ≤ (1/(1+ε))·Σ_{i,j} and has entropy H(μ̃) ≥ (n/25)·(3^{1/4}√ε − 1)²/(√3·ε), for any ε ≥ 1/3^{1/2}.
There are two prior works on computational issues relating to maximum entropy principles, both proving hardness results.
[BGS14] considers the "hard-core" model where the functionals φ_t are such that the distribution μ(x) puts zero mass on configurations x which are not independent sets with respect to some graph G. They show that unless NP = RP, there is no FPRAS for calculating the potentials J_t, given the mean parameters E_μ[φ_t(x)].
[SV14] prove an equivalence between calculating the mean parameters and calculating partition functions. More precisely, they show that given an oracle that can calculate the mean parameters up to a (1 + ε) multiplicative factor in time O(poly(1/ε)), one can calculate the partition function of the same exponential family up to a (1 + O(poly(ε))) multiplicative factor, in time O(poly(1/ε)). Note, the ε in this work potentially needs to be polynomially small in n (i.e. an oracle that can calculate the mean parameters to a fixed multiplicative constant cannot be used).
Both results prove hardness for fine-grained approximations to the maximum entropy principle, and ask for outputting approximations to the mean parameters. Our result circumvents these hardness results by providing a distribution which is not in the maximum-entropy exponential family, and is allowed to only approximately match the moments as well. To the best of our knowledge, such an approximation, while very natural, has not been considered in the literature.
Provable variational methods The main theorems in this section will concern the approximation factor that can be achieved by degree-2 pseudo-moment relaxations of the standard variational principle due to Gibbs ([Ell12]). As outlined before, we will be concerned with a particularly popular exponential family: Ising models. We will prove the following three results:
Theorem 2.2 (Ferromagnetic Ising, informal). There is a convex programming relaxation based on degree-2 pseudo-moments that calculates up to a multiplicative approximation factor of 50 the value of log Z, where Z is the partition function of the exponential distribution μ(x) ∝ exp(Σ_{i,j} J_{i,j} x_i x_j) for J_{i,j} > 0.
Theorem 2.3 (Ising model, informal). There is a convex programming relaxation based on degree-2 pseudo-moments that calculates up to a multiplicative approximation factor of O(log n) the value of log Z, where Z is the partition function of the exponential distribution μ(x) ∝ exp(Σ_{i,j} J_{i,j} x_i x_j).
Theorem 2.4 (Ising model, informal). There is a convex programming relaxation based on degree-2 pseudo-moments that calculates up to a multiplicative approximation factor of O(log χ(G)) the value of log Z, where Z is the partition function of the exponential distribution μ(x) ∝ exp(Σ_{i,j∈E(G)} J_{i,j} x_i x_j), where G = (V(G), E(G)) is a graph with chromatic number χ(G).³
While a lot of work has been done on variational methods in general (see the survey by [WJ08] for a detailed overview), to the best of our knowledge nothing is known about the worst-case guarantee that we are interested in here. Moreover, other than a recent paper by [Ris16], no other work has provided provable bounds for variational methods that proceed via a convex relaxation and a rounding thereof.⁴ [Ris16] provides guarantees in the case of Ising models that are also based on pseudo-moment relaxations of the variational principle, albeit only in the special case when the graph is "dense" in a suitably defined sense.⁵ The results there are very specific to the density assumption and cannot be adapted to our worst-case setting.
Finally, we mention that in the special case of ferromagnetic Ising models, an algorithm based on MCMC was provided by [JS93], which can give an approximation factor of (1 + ε) to the partition function and runs in time O(n^11 poly(1/ε)). In spite of this, the focus of this part of our paper is to provide understanding of variational methods in certain cases, as they continue to be popular in practice for their faster running time compared to MCMC-based methods but are theoretically much more poorly studied.
³ Theorem 2.4 is strictly more general than Theorem 2.3; however, the proof of Theorem 2.3 uses less heavy machinery and is illuminating enough that we feel it merits being presented as a separate theorem.
⁴ In some sense, it is possible to give provable bounds for Bethe-entropy based relaxations, via analyzing belief propagation directly, which has been done in cases where there is correlation decay and the graph is locally tree-like. [WJ08] has a detailed overview of such results.
⁵ More precisely, they prove that in the case when ∀i, j: |J_{i,j}| ≤ n^{−2} Σ_{i,j} |J_{i,j}|, one can get an additive ε·(Σ_{i,j} J_{i,j}) approximation to log Z in time n^{O(1/ε²)}.
3 Approximate maximum entropy principles
Let us recall the problem we want to solve:
Approximate maximum entropy principles We are given a positive-semidefinite matrix Σ ∈ R^{n×n} with Σ_{i,i} = 1, ∀i ∈ [n], which is the covariance matrix of a centered distribution over {−1, 1}^n, i.e. E_μ[x_i x_j] = Σ_{i,j}, E_μ[x_i] = 0, for a distribution μ : {−1, 1}^n → R. We wish to produce a distribution μ̃ : {−1, 1}^n → R with pairwise covariances that match the given ones up to constant factors, and entropy within a constant factor of the maximum entropy distribution with covariance Σ.⁶
⁶ Note that for a distribution over {−1, 1}^n, the maximal entropy a distribution can have is n, which is achieved by the uniform distribution.
Before stating the result formally, it will be useful to define the following constant:
Definition 3.1. Define the constant G = min_{t∈[−1,1]} (2/π)·arcsin(t)/t ≈ 0.64.
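Numerically, the minimum is attained as t → 0, where (2/π)·arcsin(t)/t → 2/π ≈ 0.6366; a short grid evaluation (ours) confirms the constant.

```python
import numpy as np

t = np.linspace(-1, 1, 200001)
t = t[t != 0]                        # the t -> 0 limit equals 2/pi
vals = (2 / np.pi) * np.arcsin(t) / t
print(vals.min())                    # ~= 2/pi ~= 0.6366, i.e. G ~= 0.64
```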
We will prove the following main theorem:
Theorem 3.1 (Main, approximate entropy principle). For any positive-semidefinite matrix Σ with Σ_{i,i} = 1, ∀i, there is an efficiently sampleable distribution μ̃ : {−1, 1}^n → R, which can be sampled as sign(g), where g ~ N(0, Σ + εI), and satisfies (G/(1+ε))·Σ_{i,j} ≤ E_μ̃[x_i x_j] ≤ (1/(1+ε))·Σ_{i,j}, and has entropy H(μ̃) ≥ (n/25)·(3^{1/4}√ε − 1)²/(√3·ε), where ε ≥ 1/3^{1/2}.
Note μ̃ is in fact very close to the distribution which is classically used to round semidefinite relaxations for solving the MAX-CUT problem [GW95]. We will prove Theorem 3.1 in two parts: by first lower bounding the entropy of μ̃, and then by bounding the moments of μ̃.
Theorem 3.2. The entropy of the distribution μ̃ satisfies H(μ̃) ≥ (n/25)·(3^{1/4}√ε − 1)²/(√3·ε) when ε ≥ 1/3^{1/2}.
Proof. A sample g from N(0, Σ̃) can be produced by sampling g1 ~ N(0, Σ), g2 ~ N(0, εI) and setting g = g1 + g2. The sum of two independent multivariate normals is again a multivariate normal. Furthermore, the mean of g is 0, and since g1 and g2 are independent, the covariance of g is Σ + εI = Σ̃.
Let us denote by Y = sign(g1 + g2) the random variable distributed according to μ̃. We wish to lower bound the entropy of Y. Toward that goal, denote the random variable S := {i ∈ [n] : (g1)_i² ≤ cD} for c, D to be chosen, and let ζ = (c−1)/c. Then, we have:

H(Y) ≥ H(Y|S) = Σ_{S⊆[n]} Pr[S = S]·H(Y|S = S) ≥ Σ_{S⊆[n], |S|≥ζn} Pr[S = S]·H(Y|S = S),

where the first inequality follows since conditioning doesn't decrease entropy, and the latter by the non-negativity of entropy. Continuing the calculation we get:

Σ_{S⊆[n], |S|≥ζn} Pr[S = S]·H(Y|S = S) ≥ ( Σ_{S⊆[n], |S|≥ζn} Pr[S = S] ) · min_{S⊆[n], |S|≥ζn} H(Y|S = S) = Pr[|S| ≥ ζn] · min_{S⊆[n], |S|≥ζn} H(Y|S = S).

We will lower bound Pr[|S| ≥ ζn] first. Notice that E[Σ_{i=1}^n (g1)_i²] = n, therefore by Markov's inequality, Pr[Σ_{i=1}^n (g1)_i² ≥ Dn] ≤ 1/D. On the other hand, if Σ_{i=1}^n (g1)_i² ≤ Dn, then |{i : (g1)_i² ≥ cD}| ≤ n/c, which means that |{i : (g1)_i² ≤ cD}| ≥ n − n/c = (c−1)n/c = ζn. Putting things together, this means Pr[|S| ≥ ζn] ≥ 1 − 1/D.
It remains to lower bound min_{S⊆[n], |S|≥ζn} H(Y|S = S). For every S ⊆ [n], |S| ≥ ζn, denoting by Y_S the coordinates of Y restricted to S, we get

H(Y|S = S) ≥ H(Y_S|S = S) ≥ H_∞(Y_S|S = S) = −log( max_{y_S} Pr[Y_S = y_S|S = S] )

(where H_∞ is the min-entropy), so we only need to bound max_{y_S} Pr[Y_S = y_S|S = S].
We will now, for any y_S, upper bound Pr[Y_S = y_S|S = S]. Recall that the event S = S implies that ∀i ∈ S, (g1)_i² ≤ cD. Since g2 is independent of g1, we know that for every fixed g ∈ R^n:

Pr[Y_S = y_S|S = S, g1 = g] = Π_{i∈S} Pr[sign([g]_i + [g2]_i) = y_i].

For a fixed i ∈ S, consider the term Pr[sign([g]_i + [g2]_i) = y_i]. Without loss of generality, let's assume [g]_i > 0 (the proof is completely symmetric in the other case). Then, since [g]_i is positive and g2 has mean 0, we have Pr[[g]_i + [g2]_i < 0] ≤ 1/2.
Moreover,

Pr[[g]_i + [g2]_i > 0] = Pr[[g2]_i > 0]·Pr[[g]_i + [g2]_i > 0 | [g2]_i > 0] + Pr[[g2]_i < 0]·Pr[[g]_i + [g2]_i > 0 | [g2]_i < 0].

The first term is upper bounded by 1/2 since Pr[[g2]_i > 0] ≤ 1/2. The second term we will bound using standard Gaussian tail bounds:

Pr[[g]_i + [g2]_i > 0 | [g2]_i < 0] ≤ Pr[|[g2]_i| ≤ |[g]_i| | [g2]_i < 0] = Pr[|[g2]_i| ≤ |[g]_i|] ≤ Pr[([g2]_i)² ≤ cD]
= 1 − Pr[([g2]_i)² > cD] ≤ 1 − √(2/π) · ( √(ε/cD) − (√(ε/cD))³ ) · exp(−cD/2ε),

which implies

Pr[[g2]_i < 0]·Pr[[g]_i + [g2]_i > 0 | [g2]_i < 0] ≤ (1/2) · ( 1 − √(2/π) · ( √(ε/cD) − (√(ε/cD))³ ) · exp(−cD/2ε) ).
Putting it together, we have

Pr[sign((g1)_i + (g2)_i) = y_i] ≤ 1 − (1/√(2π)) · ( √(ε/cD) − (√(ε/cD))³ ) · exp(−cD/2ε).

Together with the fact that |S| ≥ ζn, we get

Pr[Y_S = y_S|S = S, g1 = g] ≤ ( 1 − (1/√(2π)) · ( √(ε/cD) − (√(ε/cD))³ ) · exp(−cD/2ε) )^{ζn},

which implies that

H(Y) ≥ −(1 − 1/D) · ((c−1)n/c) · log( 1 − (1/√(2π)) · ( √(ε/cD) − (√(ε/cD))³ ) · exp(−cD/2ε) ).

By setting c = D = 3^{1/4}√ε and a straightforward (albeit unpleasant) calculation, we can check that H(Y) ≥ (n/25)·(3^{1/4}√ε − 1)²/(√3·ε), as we need.
We next show that the moments of the distribution are preserved up to a constant.
Lemma 3.1. The distribution μ̃ satisfies (G/(1+ε))·Σ_{i,j} ≤ E_μ̃[X_i X_j] ≤ (1/(1+ε))·Σ_{i,j}.
Proof. Consider the Gram decomposition Σ̃_{i,j} = ⟨v_i, v_j⟩. Then, a sample from μ̃ is equal in distribution to (sign(⟨v_1, s⟩), …, sign(⟨v_n, s⟩)), where s ~ N(0, I). Similarly as in the analysis of Goemans-Williamson [GW95], if v̂_i = v_i/‖v_i‖, we have

G·⟨v̂_i, v̂_j⟩ ≤ E_μ̃[X_i X_j] = (2/π)·arcsin(⟨v̂_i, v̂_j⟩) ≤ ⟨v̂_i, v̂_j⟩.

However, since ⟨v̂_i, v̂_j⟩ = ⟨v_i, v_j⟩/(‖v_i‖‖v_j‖) = Σ̃_{i,j}/(‖v_i‖‖v_j‖) and ‖v_i‖ = √(Σ̃_{i,i}) = √(1+ε), ∀i ∈ [1, n], we get that (G/(1+ε))·Σ_{i,j} ≤ E_μ̃[X_i X_j] ≤ (1/(1+ε))·Σ_{i,j}, as we want.
Theorem 3.2 and Lemma 3.1 together imply Theorem 3.1.
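The sampler in Theorem 3.1 is easy to implement; the sketch below (ours, with an arbitrary example Σ) draws from μ̃ and empirically checks that the ratio E_μ̃[x_i x_j]·(1+ε)/Σ_{i,j} lies in [G, 1], as Lemma 3.1 predicts.

```python
import numpy as np

# Sample from mu~ = sign(g), g ~ N(0, Sigma + eps*I), and check the moment
# sandwich of Lemma 3.1 empirically. Sigma here is an arbitrary valid example.
rng = np.random.default_rng(0)
n, eps, m = 5, 0.8, 200000
A = rng.standard_normal((n, n))
Sigma = A @ A.T
d = np.sqrt(np.diag(Sigma))
Sigma = Sigma / np.outer(d, d)                 # unit diagonal, PSD

L = np.linalg.cholesky(Sigma + eps * np.eye(n))
g = rng.standard_normal((m, n)) @ L.T
X = np.sign(g)                                 # m samples from mu~

emp = X.T @ X / m                              # empirical E[x_i x_j]
i, j = 0, 1
ratio = emp[i, j] * (1 + eps) / Sigma[i, j]
print(ratio)   # lies in [G, 1] up to sampling error, with G = 2/pi
```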
4 Provable bounds for variational methods
We will in this section consider applications of the approximate maximum entropy principles we developed for calculating partition functions of Ising models. Before we dive into the results, we give brief preliminaries on variational methods and pseudo-moment convex relaxations.
Preliminaries on variational methods and pseudo-moment convex relaxations Recall, variational methods are based on the following simple lemma, which characterizes log Z as the solution of an optimization problem. It essentially dates back to Gibbs [Ell12], who used it in the context of statistical mechanics, though it has been rediscovered by machine learning researchers [WJ08]:
Lemma 4.1 (Variational characterization of log Z). Let us denote by M the polytope of distributions over {−1, 1}^n. Then,

log Z = max_{μ∈M} { Σ_t J_t E_μ[φ_t(x)] + H(μ) }.   (1)
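On a toy instance, Equation 1 can be verified by brute force: the objective evaluated at the Boltzmann distribution itself, which attains the maximum, equals log Z. The sketch below (ours) does exactly that.

```python
import itertools
import numpy as np

# Brute-force check of Equation 1 on a tiny Ising model. J is an arbitrary
# small example; entropy and log Z are both in nats.
rng = np.random.default_rng(1)
n = 4
J = rng.standard_normal((n, n))
J = (J + J.T) / 2

xs = np.array(list(itertools.product([-1, 1], repeat=n)))
energy = np.einsum('ki,ij,kj->k', xs, J, xs)   # sum_ij J_ij x_i x_j per config
Z = np.exp(energy).sum()
mu = np.exp(energy) / Z                        # Boltzmann distribution
objective = (mu * energy).sum() - (mu * np.log(mu)).sum()
print(np.log(Z), objective)                    # equal up to float error
```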
While the above lemma reduces calculating log Z to an optimization problem, optimizing over the polytope M is impossible in polynomial time. We will proceed in a way which is natural for optimization problems: by instead optimizing over a relaxation M0 of that polytope.
The relaxation will be associated with the degree-2 Lasserre hierarchy. Intuitively, M0 has as variables tentative pairwise moments of a distribution over {−1, 1}^n, and it imposes all constraints on the moments that hold for distributions over {−1, 1}^n. To define M0 more precisely we will need the following notion (for a more in-depth review of moment-based convex hierarchies, the reader can consult [BKS14]):
Definition 4.1. A degree-2 pseudo-moment⁷ Ẽ_μ[·] is a linear operator mapping polynomials of degree 2 to R, such that Ẽ_μ[x_i²] = 1, and Ẽ_μ[p(x)²] ≥ 0 for any polynomial p(x) of degree 1.
We will be optimizing over the polytope M0 of all degree-2 pseudo-moments, i.e. we will consider solving

max_{Ẽ_μ[·]∈M0} { Σ_t J_t Ẽ_μ[φ_t(x)] + H̃(Ẽ_μ[·]) },

where H̃ will be a proxy for the entropy, which we will have to define (since entropy is a global property that depends on all moments, and Ẽ_μ only contains information about second order moments).
To see that this optimization problem is convex, we show that it can easily be written as a semidefinite program. Namely, note that the pseudo-moment operators are linear, so it suffices to define them over monomials only. Hence, the variables will simply be Ẽ_μ(x_S) for all monomials x_S of degree at most 2. The constraints Ẽ_μ[x_i²] = 1 then are clearly linear, as is the "energy part" of the objective function. So we only need to worry about the constraint Ẽ_μ[p(x)²] ≥ 0 and the entropy functional.
We claim the constraint Ẽ_μ[p(x)²] ≥ 0 can be written as a PSD constraint: namely, we define the matrix Q, which is indexed by all the monomials of degree at most 1, and satisfies Q(x_S, x_T) = Ẽ_μ[x_S x_T]. It is easy to see that Ẽ_μ[p(x)²] ≥ 0 ⟺ Q ⪰ 0.
⁷ The reason Ẽ_μ[·] is called a pseudo-moment is that it behaves like the moments of a distribution μ : {−1, 1}^n → [0, 1], albeit only over polynomials of degree at most 2.
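As an illustration of the resulting semidefinite program for Ising models, the sketch below uses cvxpy (an assumption on our part; any SDP solver works), with Q indexed by the monomials {1, x_1, …, x_n} and the entropy relaxed trivially to n, as in the relaxation stated next.

```python
import cvxpy as cp
import numpy as np

# Degree-2 pseudo-moment relaxation for an Ising model, as an SDP.
# Q[1+i, 1+j] plays the role of E~[x_i x_j]; the diagonal constraint encodes
# E~[x_i^2] = 1 (and E~[1] = 1).
def relaxed_log_partition(J):
    n = J.shape[0]
    Q = cp.Variable((n + 1, n + 1), PSD=True)
    constraints = [cp.diag(Q) == 1]
    obj = cp.Maximize(cp.sum(cp.multiply(J, Q[1:, 1:])) + n)
    prob = cp.Problem(obj, constraints)
    prob.solve()
    return prob.value   # an upper bound on log Z, per the discussion above
```

For the ferromagnetic strengthening used later (relaxation 3), one would additionally impose the elementwise constraint Q[1:, 1:] >= 0.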
Hence, the final concern is how to write an expression for the entropy in terms of the low-order moments, since entropy is a global property that depends on all moments. There are many candidates for this in machine learning, like the Bethe/Kikuchi entropy, the tree-reweighted Bethe entropy, the log-determinant, etc. However, in the worst case, none of them come with any guarantees. We will in fact show that the entropy functional is not an issue: we will relax the entropy trivially to n.
Given all of this, the final relaxation we will consider is:

max_{Ẽ_μ[·]∈M0} { Σ_t J_t Ẽ_μ[φ_t(x)] + n }.   (2)
From the prior setup it is clear that the solution to 2 is an upper bound on log Z. To prove a claim like Theorem 2.3 or Theorem 2.4, we will then provide a rounding of the solution. In this instance, this will mean producing a distribution μ̃ which has a value of Σ_t J_t E_μ̃[φ_t(x)] + H(μ̃) comparable to the value of the solution. Note this is slightly different from the usual requirement in optimization, where one cares only about producing a single x ∈ {−1, 1}^n with value comparable to the solution. Our distribution μ̃ will have entropy Ω(n), and preserves the "energy" portion of the objective Σ_t J_t E_μ[φ_t(x)] up to a factor comparable to what is achievable in the optimization setting.
Warmup: exponential family analogue of MAX-CUT As a warmup, to illustrate the basic ideas behind the above rounding strategy, before we consider Ising models we consider the exponential family analogue of MAX-CUT. It is defined by the functionals φ_{i,j}(x) = (x_i − x_j)². Concretely, we wish to approximate the partition function of the distribution μ(x) ∝ exp( Σ_{i,j} J_{i,j}(x_i − x_j)² ).
We will prove the following simple observation:
Observation 4.1. The relaxation 2 provides a factor 2 approximation of log Z.
Proof. We proceed as outlined in the previous section, by providing a rounding of 2. We point out again that, unlike the standard case in optimization, where typically one needs to produce an assignment of the variables, because of the entropy term here it is crucial that the rounding produces a distribution. The distribution μ̃ we produce here will be especially simple: we will round each x_i independently to +1 or −1 with probability 1/2 each. Then, clearly H(μ̃) = n. On the other hand, we similarly have Pr_μ̃[(x_i − x_j)² = 4] = 1/2, since x_i and x_j are rounded independently. Hence, E_μ̃[(x_i − x_j)²] = 2 ≥ (1/2)·Ẽ_μ[(x_i − x_j)²]. Altogether, this implies Σ_{i,j} J_{i,j} E_μ̃[(x_i − x_j)²] + H(μ̃) ≥ (1/2)·Σ_{i,j} J_{i,j} Ẽ_μ[(x_i − x_j)²] + n, as we needed.
4.1 Ising models
We proceed with the main results of this section on Ising models, which is the case where φ_{i,j}(x) = x_i x_j. We will treat the ferromagnetic and general cases separately, as outlined in Section 2. To be concrete, we are given potentials J_{i,j}, and we wish to calculate the partition function of the Ising model μ(x) ∝ exp( Σ_{i,j} J_{i,j} x_i x_j ).
Ferromagnetic case
Recall, in the ferromagnetic case of the Ising model the potentials satisfy J_{i,j} > 0. We will provide a convex relaxation which has a constant factor approximation in this case. First, recall the famous first Griffiths inequality due to Griffiths [Gri67], which states that in the ferromagnetic case, E_μ[x_i x_j] ≥ 0, ∀i, j.
Using this inequality, we will look at the following natural strengthening of the relaxation 2:

max_{Ẽ_μ[·]∈M0; Ẽ_μ[x_i x_j]≥0, ∀i,j} { Σ_t J_t Ẽ_μ[φ_t(x)] + n }.   (3)

We will prove the following theorem, as a straightforward implication of our claims from Section 3:
Theorem 4.1. The relaxation 3 provides a factor 50 approximation of log Z.
Proof. Notice that, due to Griffiths' inequality, 3 is in fact a relaxation of the Gibbs variational principle and hence an upper bound on log Z. Same as before, we will provide a rounding of 3. We will use the distribution μ̃ we designed in Section 3: the sign of a Gaussian with covariance matrix Σ + εI, where Σ_{i,j} = Ẽ_μ[x_i x_j], for an ε which we will specify. By Theorem 3.2, we then have H(μ̃) ≥ (n/25)·(3^{1/4}√ε − 1)²/(√3·ε) whenever ε ≥ 1/3^{1/2}. By Lemma 3.1, on the other hand, and using Ẽ_μ[x_i x_j] ≥ 0, we can prove that E_μ̃[x_i x_j] ≥ (G/(1+ε))·Ẽ_μ[x_i x_j].
By setting ε = 21.8202, we get (1/25)·(3^{1/4}√ε − 1)²/(√3·ε) ≥ 0.02 and G/(1+ε) ≥ 0.02, which implies that

Σ_{i,j} J_{i,j} E_μ̃[x_i x_j] + H(μ̃) ≥ 0.02·( Σ_{i,j} J_{i,j} Ẽ_μ[x_i x_j] + n ),

which is what we need.
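As a sanity check on the choice ε = 21.8202, the trade-off between the two coefficients in the proof can be inspected numerically; the sketch below is ours and uses the constants as reconstructed above, so the balancing point is approximate.

```python
import numpy as np

# Entropy coefficient vs. moment coefficient as functions of eps; the rounding
# loses the minimum of the two, so the best eps roughly balances them.
eps = np.linspace(0.6, 60, 100000)
entropy_coef = (3 ** 0.25 * np.sqrt(eps) - 1) ** 2 / (25 * np.sqrt(3) * eps)
moment_coef = (2 / np.pi) / (1 + eps)
balanced = eps[np.argmax(np.minimum(entropy_coef, moment_coef))]
print(balanced)   # ~21-22, close to the eps = 21.8202 used in the proof;
                  # both coefficients are then ~0.028 >= 0.02
```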
Note that the above proof does not work in the general Ising model case: when Ẽ_μ[x_i x_j] can be either positive or negative, even if we preserve each Ẽ_μ[x_i x_j] up to a constant factor, this may not preserve the sum Σ_{i,j} J_{i,j} Ẽ_μ[x_i x_j], due to cancellations in that expression.
General Ising models case
Finally, we will tackle the general Ising model case. As noted in the previous section, the straightforward application of the results proven in Section 3 doesn't work, so we have to consider a different rounding, again inspired by roundings used in optimization.
The intuition is the same as in the ferromagnetic case: we wish to design a rounding which preserves the "energy" portion of the objective, while having high entropy. In the previous section, this was achieved by modifying the Goemans-Williamson rounding so that it produces a high-entropy distribution. We will do a similar thing here, by modifying roundings due to [CW04] and [AMMN06].
The convex relaxation we will consider will just be the basic one, 2, and we will prove the following two theorems:
Theorem 4.2. The relaxation 2 provides a factor O(log n) approximation to log Z when φ_{i,j}(x) = x_i x_j.
Theorem 4.3. The relaxation 2 provides a factor O(log χ(G)) approximation to log Z when φ_{i,j}(x) = x_i x_j for i, j ∈ E(G) of some graph G = (V(G), E(G)), where χ(G) is the chromatic number of G.
Since the chromatic number of a graph is bounded by n, the second theorem is in fact strictly stronger than the first; however, the proof of the first theorem uses less heavy machinery and is illuminating enough to be presented on its own.
Due to space constraints, the proofs of these theorems are forwarded to the appendix.
5 Conclusion
In summary, we presented computationally efficient approximate versions of the classical max-entropy principle by [Jay57]: efficiently sampleable distributions which preserve given pairwise moments up to a multiplicative constant factor, while having entropy within a constant factor of the maximum entropy distribution matching those moments. Additionally, we applied our insights to designing provable variational methods for Ising models which provide guarantees for approximating the log-partition function comparable to those in the optimization setting. Our methods are based on convex relaxations of the standard variational principle due to Gibbs, they are extremely generic, and we hope they will find applications for other exponential families.
References
[AMMN06] Noga Alon, Konstantin Makarychev, Yury Makarychev, and Assaf Naor. Quadratic forms on graphs. Inventiones mathematicae, 163(3):499–522, 2006.
[AN06] Noga Alon and Assaf Naor. Approximating the cut-norm via Grothendieck's inequality. SIAM Journal on Computing, 35(4):787–803, 2006.
[BB07] Matthias Bethge and Philipp Berens. Near-maximum entropy models for binary neural representations of natural images. 2007.
[BGS14] Guy Bresler, David Gamarnik, and Devavrat Shah. Hardness of parameter estimation in graphical models. In Advances in Neural Information Processing Systems, pages 1062–1070, 2014.
[BKS14] Boaz Barak, Jonathan A. Kelner, and David Steurer. Rounding sum-of-squares relaxations. In Proceedings of the 46th Annual ACM Symposium on Theory of Computing, pages 31–40. ACM, 2014.
[CW04] Moses Charikar and Anthony Wirth. Maximizing quadratic programs: extending Grothendieck's inequality. In Foundations of Computer Science, 2004. Proceedings. 45th Annual IEEE Symposium on, pages 54–60. IEEE, 2004.
[Ell12] Richard S. Ellis. Entropy, large deviations, and statistical mechanics, volume 271. Springer Science & Business Media, 2012.
[EN78] Richard S. Ellis and Charles M. Newman. The statistics of Curie-Weiss models. Journal of Statistical Physics, 19(2):149–161, 1978.
[Gri67] Robert B. Griffiths. Correlations in Ising ferromagnets. I. Journal of Mathematical Physics, 8(3):478–483, 1967.
[GW95] Michel X. Goemans and David P. Williamson. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. Journal of the ACM (JACM), 42(6):1115–1145, 1995.
[Jay57] Edwin T. Jaynes. Information theory and statistical mechanics. Physical Review, 106(4):620, 1957.
[JS93] Mark Jerrum and Alistair Sinclair. Polynomial-time approximation algorithms for the Ising model. SIAM Journal on Computing, 22(5):1087–1116, 1993.
[Ris16] Andrej Risteski. How to compute partition functions using convex programming hierarchies: provable bounds for variational methods. In Proceedings of the Conference on Learning Theory (COLT), 2016.
[SS12] Allan Sly and Nike Sun. The computational hardness of counting in two-spin models on d-regular graphs. In Foundations of Computer Science (FOCS), 2012 IEEE 53rd Annual Symposium on, pages 361–369. IEEE, 2012.
[SV14] Mohit Singh and Nisheeth K. Vishnoi. Entropy, optimization and counting. In Proceedings of the 46th Annual ACM Symposium on Theory of Computing, pages 50–59. ACM, 2014.
[WJ08] Martin J. Wainwright and Michael I. Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1-2):1–305, 2008.
[WJW03] Martin J. Wainwright, Tommi S. Jaakkola, and Alan S. Willsky. Tree-reweighted belief propagation algorithms and approximate ML estimation by pseudo-moment matching. 2003.
[WJW05] Martin J. Wainwright, Tommi S. Jaakkola, and Alan S. Willsky. A new class of upper bounds on the log partition function. Information Theory, IEEE Transactions on, 51(7):2313–2335, 2005.
9
| 6169 |@word version:4 achievable:1 polynomial:6 stronger:1 norm:1 suitably:1 covariance:6 decomposition:1 mention:1 moment:30 configuration:1 contains:1 jaynes:2 surprising:1 si:2 written:2 additive:1 partition:18 dive:1 designed:1 core:1 provides:6 characterization:1 philipp:1 kelner:1 warmup:2 mathematical:1 dn:2 symposium:4 focs:1 prove:12 naor:2 assaf:2 theoretically:1 pairwise:6 mohit:1 allan:1 hardness:5 mechanic:3 inspired:1 spain:1 provided:2 moreover:3 unrelated:1 bounded:2 mass:1 medium:1 what:5 interpreted:1 developed:1 nj:2 guarantee:10 pseudo:12 every:2 tackle:1 wj08:5 producing:2 before:5 positive:4 limit:2 analyzing:1 approximately:2 studied:2 equivalence:1 bi:1 practice:1 area:1 maxx:1 matching:4 griffith:4 regular:1 spite:1 get:11 cannot:2 close:1 andrej:2 operator:2 put:1 context:1 impossible:2 maximumentropy:1 maximizing:2 fpras:2 straightforward:3 independently:2 convex:12 survey:2 bep:1 insight:2 proving:1 notion:1 coordinate:1 feel:1 pt:3 hierarchy:3 programming:6 us:2 designing:2 trend:1 satisfying:1 particularly:1 nisheeth:1 cut:5 ising:21 coarser:1 gw95:3 observed:1 solved:1 worst:5 calculate:4 ferromagnetic:7 sun:1 decrease:1 intuition:1 singh:1 solving:2 purely:1 completely:1 edwin:1 easily:1 newman:1 solve:1 relax:1 forwarded:1 statistic:1 jerrum:1 g1:15 final:2 seemingly:1 matthias:1 outputting:1 maximal:1 date:1 poorly:1 achieve:1 description:1 kv:2 getting:1 requirement:1 plethora:1 extending:1 produce:7 kikuchi:2 illustrate:1 develop:1 stating:1 alon:2 c:2 come:2 implies:6 tommi:2 nike:1 modifying:2 centered:2 suffices:1 preliminary:2 strictly:2 hold:1 considered:1 normal:3 exp:12 mapping:1 makarychev:2 claim:4 m0:7 achieves:1 hvi:2 estimation:2 tool:1 hope:1 clearly:2 gaussian:3 pn:2 chromatic:3 jaakkola:2 focus:2 check:1 sense:2 inference:1 typically:3 interested:1 issue:2 colt:1 constrained:1 yuanzhil:1 special:3 field:1 construct:1 equal:1 having:3 sampling:1 look:1 hv1:1 np:5 richard:2 sv14:3 preserve:4 psd:1 rediscovered:1 semidefinite:6 behind:1 implication:1 machinery:2 unless:3 tree:3 indexed:1 dichotomized:1 theoretical:1 minimal:1 instance:1 konstantin:1 elli:2 assignment:1 deviation:1 monomials:3 uniform:1 rounding:12 answer:1 density:1 siam:2 physic:2 rounded:1 michael:1 together:4 bethge:1 concrete:1 again:3 classically:1 guy:1 sinclair:1 michel:1 li:1 potential:8 yury:1 vi:5 depends:2 multiplicative:10 view:1 lot:1 characterizes:1 portion:2 curie:1 square:1 spin:1 who:1 efficiently:5 famous:1 produced:1 none:1 researcher:1 mathematicae:1 whenever:1 definition:2 inventiones:1 energy:3 thereof:1 proof:9 associated:1 sampled:2 popular:3 ask:1 recall:5 knowledge:2 satisfiability:1 back:1 worry:1 specify:1 wei:1 improved:1 done:2 though:1 generality:1 furthermore:2 just:2 sly:1 correlation:2 hand:3 propagation:2 hence:4 symmetric:1 reweighted:2 round:2 razor:1 noted:1 criterion:1 temperature:4 gh:1 image:1 variational:21 gamarnik:1 charles:1 behaves:1 functional:2 ji:16 overview:2 physical:1 conditioning:1 volume:1 tail:1 interpretation:1 relating:1 marginals:1 expressing:1 gibbs:4 rd:1 outlined:3 trivially:1 similarly:2 cancellation:1 risteski:3 etc:3 logdeterminant:1 multivariate:3 own:1 showed:1 recent:1 optimizing:4 mint:1 certain:2 inequality:7 binary:1 continue:2 yi:3 additional:1 care:1 reduces:2 alan:2 match:4 faster:1 calculation:3 y:15 n11:1 calculates:3 basic:3 essentially:1 achieved:3 preserved:2 want:2 fine:1 separately:1 crucial:1 noga:2 unlike:1 thing:2 member:1 jordan:1 consult:1 near:2 counting:2 split:1 enough:2 concerned:1 
easy:1 xj:32 idea:1 expression:2 proceed:4 useful:2 detailed:2 informally:1 clear:1 locally:1 problematic:1 notice:2 moses:1 sign:10 neuroscience:1 popularity:1 write:1 putting:2 clarity:1 graph:10 relaxation:25 merely:1 sum:3 run:1 named:1 family:13 reader:1 circumvents:1 bgs14:4 appendix:1 comparable:7 bound:14 quadratic:4 oracle:2 annual:4 adapted:1 precisely:4 constraint:6 x2:1 answered:1 extremely:3 min:4 martin:3 department:2 charikar:1 according:1 smaller:1 slightly:1 alistair:1 intuitively:1 restricted:1 pr:31 computationally:2 remains:1 devavrat:1 needed:1 know:1 merit:1 informal:3 yuanzhi:1 apply:1 generic:2 vishnoi:1 shah:1 altogether:1 rp:2 running:1 graphical:4 calculating:7 especially:1 approximating:3 hypercube:5 classical:2 objective:3 already:1 question:3 strategy:1 usual:1 separate:1 polytope:4 considers:1 reason:3 provable:8 toward:1 willsky:2 providing:2 nc:1 setup:1 unfortunately:1 robert:1 statement:1 potentially:1 negative:1 design:2 steurer:1 upper:5 observation:2 markov:1 finite:2 rn:2 arbitrary:1 community:3 david:3 namely:3 tentative:1 trbp:1 barcelona:1 nip:1 address:1 able:1 usually:1 regime:2 program:2 max:8 explanation:1 belief:2 analogue:2 wainwright:3 event:1 natural:5 business:1 brief:1 imply:1 negativity:1 extract:1 grothendieck:2 prior:3 literature:1 understanding:1 review:2 loss:1 bresler:1 proven:1 foundation:3 illuminating:2 degree:12 proxy:1 imposes:1 principle:21 occam:1 heavy:2 cd:21 summary:1 allow:1 barak:1 distributed:1 depth:1 gram:1 rich:1 doesn:2 concretely:1 polynomially:1 transaction:1 functionals:5 approximate:10 boaz:1 ml:1 global:2 xi:34 lasserre:1 additionally:2 bethe:4 williamson:4 poly:4 berens:1 anthony:1 domain:1 vj:2 main:5 dense:1 bounding:2 nothing:2 allowed:1 wish:5 exponential:14 candidate:1 tied:1 wirth:1 grained:1 theorem:25 specific:1 xt:2 jt:13 kvi:2 decay:1 x:4 concern:2 intractable:1 albeit:3 arcsin:2 flavor:1 entropy:56 girth:1 simply:1 likely:1 hvn:1 jacm:1 g2:28 springer:1 satisfies:4 acm:5 goal:1 hard:5 lemma:7 called:1 goemans:3 formally:1 unpleasant:1 mark:1 latter:2 jonathan:1 mcmc:2 princeton:6 phenomenon:1 |
5,713 | 617 | ?
a
Statistical Mechanics of Learning In
Large Committee Machine
Holm Schwarze
CONNECT, The Niels Bohr Institute
Blegdamsvej 17, DK-2100 Copenhagen 0, Denmark
John Hertz?
Nordita
Blegdamsvej 17, DK-2100 Copenhagen 0, Denmark
Abstract
We use statistical mechanics to study generalization in large committee machines. For an architecture with nonoverlapping receptive fields a replica calculation yields the generalization error in the
limit of a large number of hidden units. For continuous weights the
generalization error falls off asymptotically inversely proportional
to Q, the number of training examples per weight. For binary
weights we find a discontinuous transition from poor to perfect
generalization followed by a wide region of metastability. Broken
replica symmetry is found within this region at low temperatures.
For a fully connected architecture the generalization error is calculated within the annealed approximation. For both binary and
continuous weights we find transitions from a symmetric state to
one with specialized hidden units, accompanied by discontinuous
drops in the generalization error.
1
Introduction
There has been a good deal of theoretical work on calcula.ting the generalization
ability of neural networks within the fra.mework of statistical mechanics (for a review
? Address in 1993: Laboratory of Neuropsychology, NIMH, Bethesda, MD 20892, USA
523
524
Schwarze and Hertz
see e.g. Watkin et.al., 1992; Seung et.al., 1992). This approach has mostly been
applied to single-layer nets (e.g. Gyorgyi and Tishby, 1990; Seung et.al., 1992).
Extensions to networks with a hidden layer include a model with small hidden
receptive fields (Sompolinskyand Tishby, 1990), some general results on networks
whose outputs are continuous functions of their inputs (Seung et.al., 1992; Krogh
and Hertz, 1992), and calculations for a so-called committee machine (Nilsson,
1965), a two-layer Boolean network, which implements a majority decision of the
hidden units (Schwarze et.al., 1992; Schwarze and Hertz, 1992; Mato and Parga,
1992; Barkai et.al., 1992; Engel et.al., 1992). This model has previomlly been studied
when learning a function which could be implemented by a simple perceptron (i.e.
one with no hidden units) in the high-temperature (i.e. high-noise) limit (Schwarze
et.al., 1992). In most practical applications, however, the function to be learnt is
not linearly separable. Therefore, we consider here a committee machine trained on
a rule which itself is defined by another committee machine (the 'teacher' network)
and hence not linearly separable.
We calculate the generalization error, the probability of misclassifying an arbitrary
new input, as a function of 0, the ratio of the number of training examples P to the
number of adjustable weights in the network. First we present results for the 'tree'
committee machine, a restricted version of the model in which the receptive fields of
the hidden units do not overlap. In section 3 we study a fully connected architecture
allowing for correlations between different hidden units in the student network. In
both cases we study a large-net limit in which the total number of inputs (N) and
the number of hidden units (K) both go to infinity, but with K ?: N.
2
Committee machine with nonoverlapping receptive fields
In this model each hidden unit receives its input from N I K input units, subject to
the restriction that different hidden units do not share common inputs. Therefore
there is only one path from each input unit to the output. The hidden-output
weights are all fixed to +1 as to implement a majority decision of the hidden units.
The overall network output for inputs 5, E R N/K, 1 = 1, ... , K, to the K branches
is given by
0"(
where
0"1
{S,}) = sign ( ~
t.
0",
(5,?) ,
(1)
is the output of the lth hidden unit, given by
0".(5,)
= sign (
1ft w, . 5.) .
(2)
Here W, is the N I K -dimensional weight vector connecting the input with the Ith
hidden unit. The training examples ({~#-' ,}, r( {~#-' ,}), j.? = I, ... , P, are generated by
another committee machine with weight vectors 11, and an overall output r({~#-'I})'
defined analogously to (1). There are N adjustable weights in the network, and
therefore we have 0 = PIN.
Statistical Mechanics of Learning in a Large Committee Machine
As in the corresponding calculations for simple perceptrons (Gardner and Derrida, 1988; Gyorgyi and Tishby, 1990; Seung et.al., 1992), we consider a stochastic
learning algorithm which for long training times yields a Gibbs distribution of networks. The statistical mechanics approach starts out from the partition function
Z = jdpo({W,})e- 13E ({W , }), an integral over weight space with a priori measure
Po({W,}), weighted with a thermal factor e- 13E ({w ,l), where E is the total error
on the training examples
p
E({W,}) =
I:e[-u({{IL,}) .r({{IL I })].
(3)
1L=1
The formal temperature T = 1/ f3 defines the level of noise during the training
process. For T = 0 this procedure corresponds to simply minimizing the training
error E.
From this the average free energy F = -T ((lnZ)), averaged over all possible sets
of training examples can be calculated using the replica method (for details see
Schwarze and Hertz, 1992). Like the calculations for simple perceptrons, our theory
has two sets of order parameters:
0.13 _ K Wo. WI3
q,
- N -I
?-1
a.
RI
= NK
a.
WI ?V , .
Note that these are the only order parameters in this model. Due to the tree structure no correlations between different hidden units exist. Assuming both replica
symmetry and 'translational symmetry' we are left with two parameters: q, the
pattern average of the square of the average input-hidden weight vector, and R,
the average overlap between this weight vector and a corresponding one for the
teacher.
We then obtain expressions for the replica-symmetric free energy of the form
G(q, R, tI, R)
0 G1(q, R) + G 2 (q, R, tI, R), where the 'entropy' terms G 2 for the
continuous- and binary-weight cases are exactly the same as in the simple perceptron (Gyorgyi and Tishby, 1990, Seung et.al., 1992). In the large-K limit another
simplification similar to the zero-temperature capacity calculation (Barkai et.al.,
1992) is found in the tree model. The 'energy' term G 1 is the same as the corresponding term in the calculation for the simple perceptron, except that the order
parameters have to be replaced by f(q) = (2/1r) sin- 1 q and f(R) = (2/1r) sin- 1 R.
The generalization error
=
?g
= -1 arccos If(R)]
(4)
7r
can then be obtained from the value of R at the saddle point of the free energy.
For a network with continuous weights, the solution of the saddle point equations
yields an algebraically decreasing generalization error. There is no phase transition
at any value of 0 or T. For T = 0 the asymptotic form of the generalization error
in powers of 1/0 can be easily obtained as 1.25/0 + ('1/0 2 ), twice the ?g found for
the simple perceptron in this limit.
525
526
Schwarze and Henz
0.50
0.400.30
'"
\I)
0.20
0.10
0.00
2
0
4
3
Figure 1: Learning curve for the large-K tree committee (solid line) with binary
weights at T = 1. The phase transition occurs at Oc = 1.98, and the spinodal point
is at 0, = 3.56. The analytic results are compared with Monte Carlo simulations
with K
9, N
75 and T
I, averaged over 10 runs. In each simulation
the number of training examples is gradually increased (dotted line) and decreased
(dashed line), respectively. The broken line shows the generalization error for the
simple perceptron.
=
=
=
In contrast, the model with binary weights exhibits a phase transition at all temperatures from poor to perfect generalization. The corresponding generalization
error as a function of 0 is shown in figure 1. At small values of 0 the free energy
has two saddle points, one at R < 1 and the other at R
1. Initially the solution
with R < 1 and poor generalization ability has the lower free energy and therefore
corresponds to the equilibrium state. When the load parameter is increased to a
critical value Oc, the situation changes and the solution at R = 1 becomes the global
minimum of the free energy. The system exhibits a first order phase transition to
the state of perfect generalization. In the region Oc < 0 < 0, the R < 1 solution
remains metastable and disappears at the spinodal point 0,. We find the same
qualitative picture at all temperatures, and the complete replica symmetric phase
diagram is shown in figure 2. The solid line corresponds to the phase transition
to perfect generalization, and in the region between the solid and the dashed lines
the R < 1 state of poor generalization is metastable. Below the dotted line, the
replica-symmetric solution yields a negative entropy for the metastable state. This
is unphysical in a binary system and replica symmetry has to be broken in this
region, indicating the existence of many different metastable states.
=
The simple perceptron without hidden units corresponds to the case K = 1 in
our model. A comparison of the generalization properties with the large-K limit
shows that both limits exhibit qualitatively similar behavior. The locations of the
thermodynamic transitions and the spinodal line, however, are different and the
generalization error of the R < 1 state in the large- K committee machine is higher
than in the simple perceptron.
The case of general finite K is rather more involved, but the annealed approximation
Statistical Mechanics of Learning in a Large Committee Machine
1.0
R<1
0.8
metastability
0.6
.???????~?>?;.?r
~
0.4
~
R=1
~
/
I
RSB
0.2
I
0.0
1.0
1.5
2.0
2.5
3.0
0/
Figure 2: Replica-symmetric phase diagram ofthe large-K tree committee machine
with binary weights. The solid line shows the locations of the phase transition, and
the spinodal line is shown dashed. Below the the dotted line the replica-symmetric
solution is incorrect.
for finite K indicates a rather smooth K -dependence for 1
Parga, 1992).
<K <
00
(Mato and
We performed Monte-Carlo simulations to check the validity of the assumptions
made in our calculation and found good agreements with our analytic results. Figure
1 compares the analytic predictions for large K with Monte Carlo simulations for
K = 9. The simulations were performed for a slowly increasing and decreasing
training set size, respectively, yielding a hysteresis loop around the location of the
phase transition.
3
Fully connected committee machine
In contrast to the previous model the hidden units in the fully connected committee
machine receive inputs from the entire input layer. Their output for a given Ndimensional input vector 5 is given by
0',(5) = sign
(.Jw W, . 5),
(5)
while the overall output is again of the form (1). Note that the weight vectors W,
are now N-dimensional, and the load parameter is given by a = P / (K N).
For this model we solved the annealed approximation, which replaces ((In Z)) by
In ((Z)). This approximation becomes exact at high temperatures (high noise level
during training). For learnable target rules, as in the present problem, previous work
indicates that the annealed approximation yields qualitatively correct results and
correctly predicts the shape of the learning curves even at low temperatures (Seung
et.al., 1992). Performing the average over all possible training sets again leads to
two sets of order parameters: the overlaps between the student and teacher weight
527
528
Schwarze and Hertz
=
vectors, RlIe = N- 1 W, . V An and the mutual overlaps in the student network CUe
N -1 W,, Wk' The weight vectors of the target rule are assumed to be un correlated
and normalized, N- 1L . V k = O,k. As in the previous model we make symmetry
assumptions for the order parameters. In the fully connected architecture we have
to allow for correlations between different hidden units (RlIe, ClIe :f! 0 for l =f. Ie) but
also include the possibility of a specialization of individual units (Rll =f. RlIe). This
is necessary because the ground state of the system with vanishing generalization
error is achieved for the choice R'k = C'k = O,k. Therefore we make the ansatz
(6)
C'k = C + (1 - C)O/k
R'k = R + 1101111,
and evaluate the annealed free energy of the system using the saddle point method
(details will be reported elsewhere). The values of the order parameters at the
minimum of the free energy finally yield the average generalization error fg as a
function of o.
For a network with continuous weights and small 0 the global minimum of the free
energy occurs at 11 = 0 and R '" qK- 3 / 4 ). Hence, for small training sets each
hidden unit in the student network has a small symmetric overlap to all the hidden
units in the teacher network. The information obtained from the training examples
is not sufficient for a specialization of hidden units, and the generalization error
approaches a plateau. To order 1/VK, this approach is given by
?g
= fO +
~ + 0(1/ K),
fO =
~ arccos ( )2/71") ~ 0.206,
(7)
with 'Y({3) = )71"/2 - 1 [(1 - e-~)-1 - foJ/(471"). Figure 3 shows the generalization
error as a function of 0, including 1/VK-corrections for different values of K.
0.50
0.40
D'
\U
0.30 ......
(x,
0.20!:" ~'~'~" "':'''':'''.:::.:::':: :':'-",:,?::?::?..:r.:.-.: t.
0.10
(Xc
-'''''r''
i
?'?'?-?-.L.~
0.00 t.......o~...............................L.....~...........~~L........o.-.........:l
o
5
10
15
20
25
a=P/KN
Figure 3: Generalization error for continuous weights and T = 0.5. The approach to
the residual error is shown including 1/VJ(-corrections for K=5 (solid line), K=ll
(dotted line), and K=100 (dashed line). The broken line corresponds to the solution
with nonvanishing 11.
When the training set size is increased to a critical value
0,
of the load parameter,
Statistical Mechanics of Learning in a Large Committee Machine
a second minimum of the free energy appears at a finite value of /:::,. close to 1. For
a larger value Oc > 0, this becomes the global minimum of the free energy and
the system exhibits a first order phase transition. The generalization error of the
specialized solution decays smoothly with an asymptotic behavior inversely proportional to o. However, the poorly-generalizing symmetric state remains metastable
for all ? > Oc. Therefore, a stochastic learning procedure starting with /:::,. = 0 will
first settle into the metastable state. For large N it will take an exponentially long
time to cross the free energy barrier to the global minimum of the free energy.
In a network with binary weights and for large K we find the same initial approach
to a finite generalization error as in (7) for continuous weights. In the large-K limit
the discreteness of the weights does not influence the behavior for small training
sets. However, while a perfect match of the student to the teacher network (Rue =
e'k = Olk) cannot happen for ? < 00 in the continuous model, such a 'freezing'
is possible in a discrete system. The free energy of the binary model always has
e'k
Olk. When the load parameter is increased to a
a local minimum at R'k
critical value, this minimum becomes the global minimum of the free energy, and
a discontinuous transition into this perfectly generalizing state occurs, just as in
the binary-weight simple perceptron and the tree described in section 2. As in the
case of continuous weights, the symmetric solution remains metastable here even
for large values of o. Figure 4 shows the generalization error for binary weights,
including 1/v'K-corrections for K = 5. The predictions of the large-K theory
are compared with Monte Carlo simulations. Although we cannot expect a good
quantitative agreement for such a small committee, the simulations support our
qualitative results. Note that the leading order correction to ?o in eqn. (7) is only
small for ~ 11K. However, we have obtained a different solution, which is valid
for ? "-' (111 K). The corresponding generalization error is shown as a dotted line
in figure 4.
=
=
?
0.50
?....
0.40
0.30
'"
?
UJ
~
0.20
1M
1M
t
t
lI(
?
tt
0.10
1M
~
0.00
0
5
10
ex
15
20
25
30
= P/KN
Figure 4: Generalization error for binary weights at T = 5. The large-K theory
for different regions of ? is compared with simulations for K = 5 and N = 45
averaged over all simulations (+) and simulations, in which no freezing occurred (*),
respectively. The solid line shows the finite-o results including II v'K-corrections.
The dotted line shows the small-o solution.
529
530
Schwarze and Hertz
Compared to the tree model the fully connected committee machine shows a qualitatively different behavior. This difference is particularly pronounced in the continuous model. While the generalization error of the tree architecture decays smoothly
for all values of a, the fully connected model exhibits a discontinuous phase transition. Compared to the tree model, the fully connected architecture has an additional
symmetry, because each permutation of hidden units in the student network yields
the same output for a given input (Barkai et.al., 1992). This additional degree of
freedom causes the poor generalization ability for small training sets. Only if the
training set size is sufficiently large can the hidden units specialize on one of the
hidden units in the teacher network and achieve good generalization. However, the
poorly generalizing states remain metastable even for arbitrarily large a. A similar
phenomenon has also been found in a different architecture with only 2 hidden units
performing a parity operation (Hansel et.al., 1992).
Acknowledgements
H. Schwarze acknowledges support from the EC under the SCIENCE programme
and by the Danish Natural Science Council and the Danish Technical Research
Council through CONNECT.
References
E. Barkai, D. Hansel, and H. Sompolinsky (1992), Phys.Rev. A 45, 4146.
A. Engel, H.M. Kohler, F. Tschepke, H. Vollmayr, and A. Zippelius (1992),
Phys.Rev. A 45, 7590.
E. Gardner, B. Derrida (1989), J.Phys. A 21, 271.
G. Gyorgyi and N. Tishby (1990) in Neural Networks and Spin Glasses, edited K.
Thuemann and R. Koberle (World Scientific, Singapore).
D. Hansel, G. Mato, and C. Meunier (1992), Europhys.Lett. 20, 471.
A. Krogh, l. Hertz (1992), Advances in Neural Information Processing Systems IV,
edited by l.E. Moody, S.l. Hanson, and R.P. Lippmann, (Morgan Kaufmann, San
Mateo).
G. Mato, N. Parga (1992), J.Phys. A 25, 5047.
N.J. Nilsson (1965) Learning Machines, (McGraw-Hill, New York).
H. Schwarze, M. Opper, and W. Kinzel (1992), Phys.Rev. A 45, R6185.
H. Schwarze, J. Hertz (1992), Europhys.Lett. 20,375.
H.S. Seung, H. Sompolinsky, and N. Tishby (1992), Phys.Rev. A 45, 6056.
H. Sompolinsky, N. Tishby (1990), Europhys.Lett. 13,567.
T. Watkin, A. Rau, and M. Biehl (1992), to be published in Review of Modern
Physics.
| 617 |@word version:1 simulation:10 solid:6 initial:1 john:1 happen:1 partition:1 shape:1 analytic:3 drop:1 cue:1 ith:1 vanishing:1 location:3 qualitative:2 incorrect:1 specialize:1 behavior:4 mechanic:7 decreasing:2 increasing:1 becomes:4 zippelius:1 quantitative:1 ti:2 exactly:1 unit:26 local:1 limit:8 path:1 twice:1 studied:1 mateo:1 metastability:2 averaged:3 practical:1 implement:2 procedure:2 cannot:2 close:1 influence:1 restriction:1 rll:1 annealed:5 go:1 starting:1 rule:3 target:2 exact:1 agreement:2 particularly:1 predicts:1 ft:1 solved:1 calculate:1 region:6 connected:8 sompolinsky:3 neuropsychology:1 edited:2 broken:4 nimh:1 seung:7 trained:1 po:1 easily:1 monte:4 europhys:3 whose:1 larger:1 biehl:1 ability:3 g1:1 itself:1 net:2 loop:1 poorly:2 achieve:1 pronounced:1 perfect:5 derrida:2 krogh:2 implemented:1 discontinuous:4 correct:1 stochastic:2 settle:1 rlie:3 generalization:33 extension:1 correction:5 around:1 sufficiently:1 ground:1 equilibrium:1 niels:1 hansel:3 council:2 engel:2 weighted:1 always:1 rather:2 vk:2 indicates:2 check:1 contrast:2 glass:1 entire:1 initially:1 hidden:27 overall:3 translational:1 spinodal:4 priori:1 arccos:2 mutual:1 field:4 f3:1 modern:1 individual:1 replaced:1 phase:11 freedom:1 possibility:1 yielding:1 bohr:1 integral:1 necessary:1 tree:9 iv:1 theoretical:1 increased:4 boolean:1 tishby:7 reported:1 connect:2 kn:2 teacher:6 learnt:1 ie:1 off:1 physic:1 ansatz:1 lnz:1 connecting:1 analogously:1 moody:1 nonvanishing:1 again:2 slowly:1 watkin:2 leading:1 li:1 nonoverlapping:2 accompanied:1 student:6 wk:1 hysteresis:1 performed:2 start:1 il:2 square:1 spin:1 qk:1 kaufmann:1 yield:7 ofthe:1 parga:3 carlo:4 mato:4 published:1 plateau:1 fo:2 phys:6 danish:2 energy:16 involved:1 appears:1 higher:1 jw:1 just:1 correlation:3 receives:1 eqn:1 freezing:2 defines:1 schwarze:12 scientific:1 barkai:4 usa:1 validity:1 normalized:1 hence:2 symmetric:9 laboratory:1 deal:1 sin:2 during:2 ll:1 oc:5 hill:1 complete:1 tt:1 temperature:8 common:1 specialized:2 kinzel:1 exponentially:1 occurred:1 rau:1 gibbs:1 binary:12 arbitrarily:1 morgan:1 minimum:9 additional:2 algebraically:1 dashed:4 ii:1 branch:1 thermodynamic:1 smooth:1 technical:1 match:1 calculation:7 cross:1 long:2 olk:2 prediction:2 achieved:1 receive:1 decreased:1 diagram:2 subject:1 architecture:7 perfectly:1 expression:1 specialization:2 wo:1 york:1 cause:1 gyorgyi:4 exist:1 misclassifying:1 singapore:1 dotted:6 sign:3 per:1 correctly:1 discrete:1 nordita:1 discreteness:1 replica:10 asymptotically:1 run:1 decision:2 meunier:1 layer:4 followed:1 simplification:1 replaces:1 infinity:1 ri:1 performing:2 separable:2 metastable:8 poor:5 hertz:9 remain:1 wi:1 bethesda:1 rev:4 nilsson:2 restricted:1 gradually:1 equation:1 remains:3 pin:1 committee:18 operation:1 existence:1 include:2 xc:1 ting:1 uj:1 occurs:3 receptive:4 dependence:1 md:1 exhibit:5 blegdamsvej:2 majority:2 capacity:1 evaluate:1 denmark:2 assuming:1 holm:1 ratio:1 minimizing:1 mostly:1 negative:1 adjustable:2 allowing:1 finite:5 thermal:1 situation:1 arbitrary:1 copenhagen:2 hanson:1 address:1 foj:1 below:2 pattern:1 including:4 rsb:1 power:1 critical:3 overlap:5 natural:1 ndimensional:1 residual:1 inversely:2 picture:1 gardner:2 disappears:1 acknowledges:1 koberle:1 review:2 acknowledgement:1 asymptotic:2 fully:8 expect:1 permutation:1 proportional:2 degree:1 sufficient:1 share:1 elsewhere:1 parity:1 free:15 formal:1 allow:1 perceptron:8 institute:1 wide:1 fall:1 barrier:1 fg:1 curve:2 calculated:2 lett:3 transition:13 valid:1 world:1 opper:1 
qualitatively:3 made:1 san:1 programme:1 ec:1 lippmann:1 mcgraw:1 global:5 assumed:1 continuous:11 un:1 correlated:1 symmetry:6 rue:1 vj:1 linearly:2 noise:3 load:4 fra:1 learnable:1 dk:2 decay:2 nk:1 entropy:2 smoothly:2 generalizing:3 simply:1 saddle:4 corresponds:5 lth:1 change:1 except:1 unphysical:1 called:1 total:2 perceptrons:2 indicating:1 support:2 kohler:1 phenomenon:1 ex:1 |
5,714 | 6,170 | Privacy Odometers and Filters: Pay-as-you-Go
Composition
Ryan Rogers?
Aaron Roth?
Jonathan Ullman?
Salil Vadhan?
Abstract
In this paper we initiate the study of adaptive composition in differential privacy
when the length of the composition, and the privacy parameters themselves can
be chosen adaptively, as a function of the outcome of previously run analyses.
This case is much more delicate than the setting covered by existing composition
theorems, in which the algorithms themselves can be chosen adaptively, but the
privacy parameters must be fixed up front. Indeed, it isn?t even clear how to define
differential privacy in the adaptive parameter setting. We proceed by defining two
objects which cover the two main use cases of composition theorems. A privacy
filter is a stopping time rule that allows an analyst to halt a computation before his
pre-specified privacy budget is exceeded. A privacy odometer allows the analyst
to track realized privacy loss as he goes, without needing to pre-specify a privacy
budget. We show that unlike the case in which privacy parameters are fixed, in the
adaptive parameter setting, these two use cases are distinct. We show that there
exist privacy filters with bounds comparable (up to constants) with existing privacy composition theorems. We also give a privacy odometer that nearly matches
non-adaptive private composition theorems, but is sometimes worse by a small
asymptotic factor. Moreover, we show that this is inherent, and that any valid
privacy odometer in the adaptive parameter setting must lose this factor, which
shows a formal separation between the filter and odometer use-cases.
1
Introduction
Differential privacy [DMNS06] is a stability condition on a randomized algorithm, designed to guarantee individual-level privacy during data analysis. Informally, an algorithm is differentially private
if any pair of close inputs map to similar probability distributions over outputs, where similarity is
measured by two parameters ? and ?. Informally, ? measures the amount of privacy and ? measures
the failure probability that the privacy loss is much worse than ?. A signature property of differential
privacy is that it is preserved under composition?combining many differentially private subroutines
into a single algorithm preserves differential privacy and the privacy parameters degrade gracefully.
Composability is essential for both privacy and for algorithm design. Since differential privacy is
composable, we can design a sophisticated algorithm and prove it is private without having to rea?
Department of Applied Mathematics and Computational Science, University of Pennsylvania.
[email protected].
?
Department of Computer and Information Sciences,
University of Pennsylvania.
[email protected]. Supported in part by an NSF CAREER award, NSF grant CNS-1513694,
and a grant from the Sloan Foundation.
?
College of Computer and Information Science, Northeastern University. [email protected]
?
Center for Research on Computation & Society and John A. Paulson School of Engineering & Applied Sciences, Harvard University. [email protected]. Work done while visiting the Department of Applied
Mathematics and the Shing-Tung Yau Center at National Chiao-Tung University in Taiwan. Also supported by
NSF grant CNS-1237235, a grant from the Sloan Foundation, and a Simons Investigator Award.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
son directly about its output distribution. Instead, we can rely on the differential privacy of the basic
building blocks and derive a privacy bound on the whole algorithm using the composition rules.
The composition theorem for differential privacy is very strong, and holds even if the choice of
which differentially private subroutine to run is adaptive?that is, the choice of the next algorithm
may depend on the output of previous algorithms. This property is essential in algorithm design,
but also more generally in modeling unstructured sequences of data analyses that might be run
by a human data analyst, or even by many data analysts on the same data set, while only loosely
coordinating with one another. Even setting aside privacy, it can be very challenging to analyze
the statistical properties of general adaptive procedures for analyzing a dataset, and the fact that
adaptively chosen differentially private algorithms compose has recently been used to give strong
guarantees of statistical validity for adaptive data analysis [DFH+ 15, BNS+ 16].
However, all the known composition theorems for differential privacy [DMNS06, DKM+ 06,
DRV10, KOV15, MV16] have an important and generally overlooked caveat. Although the choice
of the next subroutine in the composition may be adaptive, the number of subroutines called and
choice of the privacy parameters ? and ? for each subroutine must be fixed in advance. Indeed, it is
not even clear how to define differential privacy if the privacy parameters are not fixed in advance.
This is generally acceptable when designing a single algorithm (that has a worst-case analysis),
since worst-case eventualities need to be anticipated and budgeted for in order to prove a theorem.
However, it is not acceptable when modeling the unstructured adaptivity of a data analyst, who may
not know ahead of time (before seeing the results of intermediate analyses) what he wants to do with
the data. When controlling privacy loss across multiple data analysts, the problem is even worse.
As a simple stylized example, suppose that A is some algorithm (possibly modeling a human data
analyst) for selecting statistical queries5 as a function of the answers to previously selected queries.
It is known that for any one statistical query q and any data set x, releasing the perturbed answer
a
? = q(x)+Z where Z ? Lap(1/?) is a Laplace random variable, ensures (?, 0)-differential privacy.
Composition theorems allow us to reason about the composition of k such operations, where the
queries can be chosen adaptively by A, as in the following simple program.
Example1(x):
For i = 1 to k: Let qi = A(?
a1 , . . . , a
?i?1 ) and let a
?i = qi (x) + Lap(1/?).
Output (?
a1 , . . . , a
?k ).
The ?basic? composition theorem [DMNS06] asserts that Example1 is (?k, 0)-differentially private.
The ?advanced? composition theorem [DRV10] gives a more sophisticated
bound and asserts that
p
(provided that ? is sufficiently small), the algorithm satisfies (? 8k ln(1/?), ?)-differential privacy
for any ? > 0. There is even an ?optimal? composition theorem [KOV15] too complicated to describe here. These analyses crucially assume that both the number of iterations k and the parameter
? are fixed up front, even though it allows for the queries qi to be adaptively chosen.6
Now consider a similar example where the number of iterations is not fixed up front, but actually
depends on the answers to previous queries. This is a special case of a more general setting where the
privacy parameter ?i in every round may be chosen adaptively?halting in our example is equivalent
to setting ?i = 0 in all future rounds.
Example2(x, ? ):
Let i ? 1, a
?1 ? q1 (x) + Lap(1/?).
While a
?i ? ? : Let i ? i + 1, qi = A(?
a1 , . . . , a
?i?1 ), and let a
?i = qi (x) + Lap(1/?).
Output (?
a1 , . . . , a
?i ).
Example2 cannot be said to be differentially private ex ante for any non-trivial fixed values of ? and ?,
because the computation might run for an arbitrarily long time and privacy may degrade indefinitely.
What can we say about privacy after we run the algorithm? If the algorithm/data-analyst happens to
stop after k rounds, can we apply the composition theorem ex post to conclude that it is (?k, 0)- and
5
A statistical query is parameterized by a predicate ?, and asks ?how many elements of the dataset satisfy
??? Changing a single element of the dataset can change the answer to the statistical query by at most 1.
6
The same analysis holds for hetereogeneous parameters (?1 , . . . , ?k ) are used in each round as long as
P
they are all fixed in advance. For basic composition ?k is replaced with ki=1 ?i and for advanced composition
q
?
Pk
2
? k is replaced with
i=1 ?i .
2
p
(? 8k log(1/?), ?)-differentially private, as we could if the algorithm were constrained to always
run for at most k rounds?
In this paper, we study the composition properties of differential privacy when everything?the
choice of algorithms, the number of rounds, and the privacy parameters in each round?may be
adaptively chosen. We show that this setting is much more delicate than the settings covered by
previously known composition theorems, but that these sorts of ex post privacy bounds do hold
with only a small (but in some cases unavoidable) loss over the standard setting. We note that the
conceptual discussion of differential privacy focuses a lot on the idea of arbitrary composition and
our results give more support for this conceptual interpretation.
1.1
Our Results
We give a formal framework for reasoning about the adaptive composition of differentially private
algorithms when the privacy parameters themselves can be chosen adaptively. When the parameters
are chosen non-adaptively, a composition theorem gives a high probability bound on the worst case
privacy loss that results from the output of an algorithm. In the adaptive parameter setting, it no
longer makes sense to have fixed bounds on the privacy loss. Instead, we propose two kinds of
primitives capturing two natural use cases for composition theorems:
1. A privacy odometer takes as input a global failure parameter ?g . After every round i in the
composition of differentially private algorithms, the odometer outputs a number ?i that may
depend on the realized privacy parameters ?i , ?i in the previous rounds. The privacy odometer
guarantees that with probability 1 ? ?g , for every round i, ?i is an upper bound on the privacy
loss in round i.
2. A privacy filter is a way to cut off access to the dataset when the privacy loss is too large. It
takes as input a global privacy ?budget? (?g , ?g ). After every round, it either outputs CONT (?continue?) or HALT depending on the privacy parameters from the previous rounds. The privacy filter
guarantees that with probability 1 ? ?g , it will output HALT before the privacy loss exceeds ?g .
When used, it guarantees that the resulting interaction is (?g , ?g )-DP.
A tempting heuristic is to take the realized privacy parameters ?1 , ?1 , . . . , ?i , ?i and apply one of the
existing composition theorems to those parameters, using that value as a privacy odometer or implementing a privacy filter by halting when getting a value that exceeds the global budget. However
this heuristic does not necessarily give valid bounds.
We first prove that the heuristic does work for the basic composition theorem [DMNS06] in which
the parameters ?i and ?i add up. We prove that summing the realized privacy parameters yields both
a valid privacy odometer and filter. The idea of a privacy filter was also considered in [ES15], who
show that basic composition works in the privacy filter application.
We then show that the heuristic breaks for the advanced composition theorem [DRV10]. However,
we give a valid privacy filter that gives the same asymptotic bound as the advanced composition
theorem, albeit with worse constants. On the other hand, we show that, in some parameter regimes,
the asymptotic bounds given by our privacy filter cannot be achieved by a privacy odometer. This
result gives a formal separation between the two models when the parameters may be chosen adaptively, which does not exist when the privacy parameters are fixed. Finally, we give a valid privacy
odometer with a bound that is only slightly worse asymptotically than the bound that the advanced
composition theorem would give if itpwere used (improperly) as a heuristic. Our bound is worse
by a factor that is never larger than log log(n) (here, n is the size of the dataset) and for some
parameter regimes is only a constant.
2
Privacy Preliminaries
Differential privacy is defined based on the following notion of similarity between two distributions.
Definition 2.1 (Indistinguishable). Two random variables X and Y taking values from domain
D are (?, ?)-indistinguishable, denoted as X ??,? Y , if ?S ? D, P [X ? S] ? e? P [Y ? S] +
? and P [Y ? S] ? e? P [X ? S] + ?.
3
There is a slight variant of indistinguishability, called point-wise indistinguishability, which is nearly
equivalent, but will be the more convenient notion for the generalizations we give in this paper.
Definition 2.2 (Point-wise Indistinguishable). Two random variables X and Y taking values from
D are (?, ?)-point-wise
if with probability at least 1 ? ? over either a ? X or
indistinguishable
P[X=a]
a ? Y , we have log P[Y =a] ? ?.
Lemma 2.3 ([KS14]). Let X and Y be two random variables taking values from D. If X and
Y are (?,
?)-point-wise indistinguishable, then X ??,? Y . Also, if X ??,? Y then X and Y are
2?
2?, e? ? -point-wise indistinguishable.
We say two databases x, x0 ? X n are neighboring if they differ in at most one entry, i.e. if there
exists an index i ? [n] such that x?i = x0?i . We can now state differential privacy in terms of
indistinguishability.
Definition 2.4 (Differential Privacy [DMNS06]). A randomized algorithm M : X n ? Y with
arbitrary output range Y is (?, ?)-differentially private (DP) if for every pair of neighboring databases
x, x0 : M(x) ??,? M(x0 ).
We then define the privacy loss LossM (a; x,
x0 ) for outcome
a ? Y and neighboring datasets
P[M(x)=a]
0
n
0
x, x ? X as LossM (a; x, x ) = log P[M(x
. We note that if we can bound
0 )=a]
LossM (a; x, x0 ) for any neighboring datasets x, x0 with high probability over a ? M(x), then
Theorem 2.3 tells us that M is differentially private. Moreover, Theorem 2.3 also implies that
this approach is without loss of generality (up to a small difference in the parameters). Thus, our
composition theorems will focus on bounding the privacy loss with high probability.
A useful property of differential privacy is that it is preserved under post-processing without degrading the parameters:
Theorem 2.5 (Post-Processing [DMNS06]). Let M : X n ? Y be (?, ?)-DP and f : Y ? Y 0 be
any randomized algorithm. Then f ? M : X n ? Y 0 is (?, ?)-DP.
We next recall a useful characterization from [KOV15]: any DP algorithm can be written as the
post-processing of a simple, canonical algorithm which is a generalization of randomized response.
Definition 2.6. For any ?, ? ? 0, we define the randomized response algorithm RR?,? : {0, 1} ?
{0, >, ?, 1} as the following (Note that if ? = 0, we will simply write the algorithm RR?,? as RR? .)
P [RR?,? (0) = 0] = ?
P [RR?,? (1) = 0] = 0
?
e
P [RR?,? (0) = >] = (1 ? ?) 1+e
?
1
P [RR?,? (1) = >] = (1 ? ?) 1+e
?
1
P [RR?,? (0) = ?] = (1 ? ?) 1+e
?
P [RR?,? (0) = 1] = 0
e
P [RR?,? (1) = ?] = (1 ? ?) 1+e
?
P [RR?,? (1) = 1] = ?
?
Kairouz, Oh, and Viswanath [KOV15] show that any (?, ?)?DP algorithm can be viewed as a postprocessing of the output of RR?,? for an appropriately chosen input.
Theorem 2.7 ([KOV15], see also [MV16]). For every (?, ?)-DP algorithm M and for all neighboring databases x0 and x1 , there exists a randomized algorithm T where T (RR?,? (b)) is identically
distributed to M(xb ) for b ? {0, 1}.
This theorem will be useful in our analyses, because it allows us to without loss of generality analyze
compositions of these simple algorithms RR?,? with varying privacy parameters.
We now define the adaptive composition of differentially private algorithms in the setting introduced
by [DRV10] and then extended to heterogenous privacy parameters in [MV16], in which all of the
privacy parameters are fixed prior to the start of the computation. The following ?composition
game? is an abstract model of composition in which an adversary can adaptively select between
neighboring datasets at each round, as well as a differentially private algorithm to run at each round
? both choices can be a function of the realized outcomes of all previous rounds. However, crucially,
the adversary must select at each round an algorithm that satisfies the privacy parameters which
have been fixed ahead of time ? the choice of parameters cannot itself be a function of the realized
outcomes of previous rounds. We define this model of interaction formally in Algorithm 1 where
the output is the view of the adversary A which includes any random coins she uses RA and the
outcomes A1 , ? ? ? , Ak of every round.
4
Algorithm 1 FixedParamComp(A, E = (E1 , ? ? ? , Ek ), b), where A is a randomized algorithm,
E1 , ? ? ? , Ek are classes of randomized algorithms, and b ? {0, 1}.
b
Select coin tosses RA
for A uniformly at random.
for i = 1, ? ? ? , k do
b
A = A(RA
, Ab1 , ? ? ? , Abi?1 ) gives neighboring datasets xi,0 , xi,1 , and Mi ? Ei
A receives Abi = Mi (xi,b )
b
return view V b = (RA
, Ab1 , ? ? ? , Abk )
Definition 2.8 (Adaptive Composition [DRV10], [MV16]). We say that the sequence of parameters
?1 , ? ? ? , ?k ? 0, ?1 , ? ? ? , ?k ? [0, 1) satisfies (?g , ?g )-differential privacy under adaptive composition
if for every adversary A, and E = (E1 , ? ? ? , Ek ) where Ei is the class of (?i , ?i )-DP algorithms, we
have FixedParamComp(A, E, ?) is (?g , ?g )-DP in its last argument, i.e. V 0 ??g ,?g V 1 .
We first state a basic composition theorem which shows that the adaptive composition satisfies differential privacy where ?the parameters just add up.?
Theorem 2.9 (Basic Composition [DMNS06], [DKM+ 06]). The sequence ?1 , ? ? ? , ?k and ?1 , ? ? ? ?k
Pk
satisfies (?g , ?g )-differential privacy under adaptive composition where ?g = i=1 ?i , and ?g =
Pk
i=1 ?i .
We now state the advanced composition bound from [DRV10] which gives a quadratic improvement
to the basic composition bound.
Theorem 2.10 (Advanced Composition). For any ?? > 0, the sequence ?1 , ? ? ? , ?k and ?1 , ? ? ? ?k
where ? = ?i and ? = ?i for all i ? q
[k] satisfies (?g , ?g )-differential privacy under adaptive
?
? and ?g = k? + ?.
?
composition where ?g = ? (e ? 1) k + ? 2k log(1/?),
This theorem can be easily generalized to hold for values of ?i that are not all equal (as done in
[KOV15]). However, this is not as all-encompassing as it would appear at first blush, because this
straightforward generalization would not allow for the values of ?i and ?i to be chosen adaptively by
the data analyst. Indeed,the definition of differential privacy itself (Definition 2.4) does not straightforwardly extend to this case. The remainder of this paper is devoted to laying out a framework for
sensibly talking about the privacy parameters ?i and ?i being chosen adaptively by the data analyst,
and to prove composition theorems (including an analogue of Theorem 2.10) in this model.
3
Composition with Adaptively Chosen Parameters
We now introduce the model of composition with adaptive parameter selection, and define privacy
in this setting.
We want to model composition as in the previous section, but allow the adversary the ability to also
choose the privacy parameters (?i , ?i ) as a function of previous rounds of interaction. We will define
the view of the interaction, similar to the view in FixedParamComp, to be the tuple that includes A?s
random coin tosses RA and the outcomes A = (A1 , ? ? ? , Ak ) of the algorithms she chose. Formally,
we define an adaptively chosen privacy parameter composition game in Algorithm 2 which takes as
input an adversary A, a number of rounds of interaction k,7 and an experiment parameter b ? {0, 1}.
We then define the privacy loss with respect to AdaptParamComp(A, k, b) in the following way
for a fixed view v = (r, a) where r represents the random coin tosses of A and we write v<i =
7
Note that in the adaptive parameter composition game, the adversary has the option of effectively stopping
the composition early at some round k0 < k by simply setting ?i = ?i = 0 for all rounds i > k0 . Hence, the
parameter k will not appear in our composition theorems the way it does when privacy parameters are fixed.
This means that we can effectively take k to be infinite. For technical reasons, it is simpler to have a finite
parameter k, but the reader should imagine it as being an enormous number.
5
Algorithm 2 AdaptParamComp(A, k, b)
b
Select coin tosses RA
for A uniformly at random.
for i = 1, ? ? ? , k do
b
A = A(RA
, Ab1 , ? ? ? , Abi?1 ) gives neighboring xi,0 , xi,1 , parameters (?i , ?i ), Mi that is
(?i , ?i )-DP
A receives Abi = Mi (xi,b )
b
return view V b = (RA
, Ab1 , ? ? ? , Abk )
(r, a1 , ? ? ? , ai?1 ):
Loss(v) = log
!
k
X
P V0 =v
=
log
P [V 1 = v]
i=1
!
k
X
P Mi (xi,0 ) = vi |v<i
def
=
Lossi (v?i ).
P [Mi (xi,1 ) = vi |v<i ]
i=1
(1)
Note that the privacy parameters (?i , ?i ) depend on the previous outcomes that A receives. We will
frequently shorten our notation ?t = ?t (v<t ) and ?t = ?t (v<t ) when the outcome is understood.
It no longer makes sense to claim that the privacy loss of the adaptive parameter composition experiment is bounded by any fixed constant, because the privacy parameters (with which we would
presumably want to use to bound the privacy loss) are themselves random variables. Instead, we
define two objects which can be used by a data analyst to control the privacy loss of an adaptive
composition of algorithms.
The first object, which we call a privacy odometer will be parameterized by one global parameter
?g and will provide a running real valued output that will, with probability 1 ? ?g , upper bound the
privacy loss at each round of any adaptive composition in terms of the realized values of ?i and ?i
selected at each round.
Definition 3.1 (Privacy Odometer). A function COMP?g : R2k
?0 ? R ? {?} is a valid privacy
odometer if for all adversaries in AdaptParamComp(A, k, b), with probability at most ?g over v ?
V 0 : |Loss(v)| > COMP?g (?1 , ?1 , ? ? ? , ?k , ?k ) .
The second object, which we call a privacy filter, is a stopping time rule. It takes two global parameters (?g , ?g ) and will at each round either output CONT or HALT. Its guarantee is that with probability
1 ? ?g , it will output HALT if the privacy loss has exceeded ?g .
Definition 3.2 (Privacy Filter). A function COMP?g ,?g : R2k
?0 ? {HALT, CONT} is a valid
privacy filter for ?g , ?g ? 0 if for all adversaries A in AdaptParamComp(A, k, b), the following ?bad event? occurs with probability at most ?g when v ? V 0 : |Loss(v)| >
?g and COMP?g ,?g (?1 , ?1 , ? ? ? , ?k , ?k ) = CONT.
We note two things about the usage of these objects. First, a valid privacy odometer
can be used to provide a running upper bound on the privacy loss at each intermediate
round: the privacy loss at round k 0 < k must with high probability be upper bounded by
COMP?g (?1 , ?1 , . . . , ?k0 , ?k0 , 0, 0, . . . , 0, 0) ? i.e. the bound that results by setting all future privacy parameters to 0. This is because setting all future privacy parameters to zero is equivalent to stopping the computation at round k 0 , and is a feasible choice for the adaptive adversary A. Second, a privacy filter can be used to guarantee that with high probability, the
stated privacy budget ?g is never exceeded ? the data analyst at each round k 0 simply queries
COMP?g ,?g (?1 , ?1 , . . . , ?k0 , ?k0 , 0, 0, . . . , 0, 0) before she runs algorithm k 0 , and runs it only if the
filter returns CONT. Again, this is guaranteed because the continuation is a feasible choice of the
adversary, and the guarantees of both a filter and an odometer are quantified over all adversaries.
We first give an adaptive parameter version of the basic composition in Theorem 2.9. See the full
version for the proof.
Theorem 3.3. For each nonnegative ?g , COMP?g is a valid privacy odometer where
Pk
COMP? (?1 , ?1 , ? ? ? , ?k , ?k ) = ? if i=1 ?i > ?g and otherwise COMP?g (?1 , ?1 , ? ? ? , ?k , ?k ) =
Pk g
? 0, COMP?g ,?g is a valid privacy filter where
i=1 ?i . Additionally, for any ?g , ?g
Pk
Pk
COMP?g ,?g (?1 , ?1 , ? ? ? , ?k , ?k ) = HALT if t=1 ?t > ?g or i=1 ?i > ?g and CONT otherwise.
6
4
Concentration Preliminaries
We give a useful concentration bound that will be pivotal in proving an improved valid privacy
odometer and filter from that given in Theorem 3.3. To set this up, we present some notation: let
(?, F, P) be a probability triple where ? = F0 ? F1 ? ? ? ? ? F is an increasing sequence of
?-algebras. Let Xi be a real-valued Fi -measurable random variable, such that E [Xi |Fi?1 ] = 0 a.s.
Pk
for each i. We then consider the martingale where M0 = 0 and Mk = i=1 Xi , ?k ? 1. We
use results from [dlPKLL04] and [vdG02] to prove the following (see supplementary file).
Theorem 4.1. For Mk given above, if there exists two random variables Ci < Di which are Fi?1
measurable for i ? 1 such that Ci ? Xi ? Di almost surely ?i ? 1. and we define U02 = 0,
Pk
2
and Uk2 = r i=1 (Di ? Ci ) , ?k ? 1, then for any fixed k ? 1, ? > 0 and ? ? 1/e, we have
2
2
Uk
Uk
+
?
2
+
log
+
1
log(1/?)
? ?.
P |Mk | ?
4
4?
We will use this martingale inequality in our analysis for deriving composition bounds for both
privacy filters and odometers. The martingale we form will be the sum of the privacy loss from
a sequence of randomized response algorithms from Definition 2.6. Note that for pure-differential
privacy (where ?i = 0) the privacy loss at round i is then ??i , which are fixed given the previous
outcomes. See the supplementary file for the case when ?i > 0 at each round i.
We then use the result from Theorem 2.7 to conclude that every differentially private algorithm is
a post processing function of randomized response. Thus determining a high probability bound on
the martingale formed from the sum of the privacy losses of a sequence of randomized response
algorithms suffices for computing a valid privacy filter or odometer.
5
Advanced Composition for Privacy Filters
We next show that we can essentially get the same asymptotic bound as Theorem 2.10 for the privacy
filter setting using the bound in Theorem 4.1 for the martingale based on the sum of privacy losses
from a sequence of randomized response algorithms (see the supplementary file for more details).
Theorem 5.1. COMP?g ,?g is a valid privacy filter for ?g ? (0, 1/e) and ?g > 0 where
Pk
COMP?g ,?g (?1 , ?1 , ? ? ? , ?k , ?k ) = HALT if i=1 ?i > ?g /2 or if ?g is smaller than
k
X
?j (e?j ? 1) /2
j=1
v
u
k
u X
+ t2
?2i +
i=1
?2g
log(1/?g )
!
1
1 + log
2
!!
Pk
log(1/?g ) i=1 ?2i
+1
log(2/?g )
?2g
(2)
and otherwise COMP?g ,?g (?1 , ?1 , ? ? ? , ?k , ?k ) = CONT.
Note that if we have
Pk
2
i=1 ?i
= O (1/ log(1/?g )) and set ?g = ?
q
Pk
2
i=1 ?i
log(1/?g )
in (2),
we are then getting the same asymptotic bound on the privacy loss as in [KOV15] and in Theo1
rem 2.10 for the case when ?i = ? for i ? [k]. If k?2 ? 8 log(1/?
, then Theorem 2.10 gives a
g)
p
bound on the privacy loss of ? 8k log(1/?g ). Note that there may be better choices for the constant
p
28.04 that we divide ?2g by in (2), but for the case when ?g = ? 8k log(1/?g ) and ?i = ? for every
i ? [n], it is nearly optimal.
6
Advanced Composition for Privacy Odometers
One might hope to achieve the same sort of bound on the privacy loss from Theorem 2.10 when
the privacy parameters may be chosen adversarially. However we show that this cannot be the
case for any validpprivacy odometer. In particular, even if an adversary selects the same privacy
parameter ? = o( log(log(n)/?g )/k) each round but can adaptively select a time to stop interacting
7
with AdaptParamComp (which is a restricted special case of the power of the general adversary ?
stopping is equivalent to setting all future p
?i , ?i = 0), then we show that there can be no valid
privacy odometer achieving a bound of o(? k log (log(n)/?g )). This gives a separation between
the achievable bounds for a valid privacy odometers and filters. But for privacy applications, it is
worth noting that ?g is typically set to be (much) smaller than 1/n, in which case this gap disappears
(since log(log(n)/?g ) = (1 + o(1)) log(1/?g ) ). We prove the following with an anti-concentration
bound for random walks from [LT91] (see full version).
Theorem 6.1. For any ?g ? (0, O(1)) there is no valid COMP?g privacy
odometer where
qP
?
Pk
i
k
2
COMP?g (?1 , 0, ? ? ? , ?k , 0) = i=1 ?i ee?i ?1
i=1 ?i log(log(n)/?g )
+1 + o
We now give our main positive result for privacy odometers, which is similar to our privacy filter
in Theorem 5.1 except that ?g is replaced by ?g / log(n),
P as is necessary from Theorem 6.1. Note
that the bound incurs an additive 1/n2 loss to the i ?2i term that is present without privacy. In
any reasonable setting of parameters, this translates to at most a constant-factor multiplicative loss,
1
because there is no utility running any differentially private algorithm with ?i < 10n
(we know
0
that if A is (?i , 0)-DP then A(x) and A(x ) for neighboring inputs have statistical distance at most
e?i n ? 1 < 0.1, and hence the output is essentially independent of the input - note that a similar
statement holds for (?i , ?i )-DP.) The proof of the following result uses Theorem 4.1 along with a
Pk
union bound over log(n2 ) choices for ?, which are discretized values for i=1 ?2i ? [1/n2 , 1]. See
the full version for the complete proof.
Theorem 6.2 (Advanced Privacy Odometer). COMP?g is a valid privacy odometer for ?g ? (0, 1/e)
Pk
Pk
where COMP?g (?1 , ?1 , ? ? ? , ?k , ?k ) = ? if i=1 ?i > ?g /2, otherwise if i=1 ?2i ? [1/n2 , 1] then
v
u k
?i
k
?
uX
X
e ?1
?2i 1 + log
3 log(4 log2 (n)/?g ).
+ 2t
COMP?g (?1 , ?1 , ? ? ? , ?k , ?k ) =
?i
2
i=1
i=1
(3)
2
2
?
?
/
[1/n
,
1]
then
COMP
(?
,
?
,
?
?
?
,
?
,
?
)
is
equal
to
?
1
1
k
k
g
i
i=1
v
!
!!
u
?i
k
k
k
u
X
X
X
1
e ?1
t
2
2
2
2
?i
+ 2 1/n +
?i
1 + log 1 + n
?i
log(4 log2 (n))/?g ).
2
2
i=1
i=1
i=1
and if
Pk
(4)
Acknowledgements
The authors are grateful Jack Murtagh for his collaboration in the early stages of this work, and for
sharing his preliminary results with us. We thank Andreas Haeberlen, Benjamin Pierce, and Daniel
Winograd-Cort for helpful discussions about composition. We further thank Daniel Winograd-Cort
for catching an incorrectly set constant in an earlier version of Theorem 5.1.
8
References
[BNS+ 16] Raef Bassily, Kobbi Nissim, Adam D. Smith, Thomas Steinke, Uri Stemmer, and
Jonathan Ullman. Algorithmic stability for adaptive data analysis. In Proceedings
of the 48th Annual ACM on Symposium on Theory of Computing, STOC, 2016.
+
[DFH 15] Cynthia Dwork, Vitaly Feldman, Moritz Hardt, Toniann Pitassi, Omer Reingold, and
Aaron Leon Roth. Preserving statistical validity in adaptive data analysis. In Proceedings of the Forty-Seventh Annual ACM on Symposium on Theory of Computing, pages
117?126. ACM, 2015.
+
[DKM 06] Cynthia Dwork, Krishnaram Kenthapadi, Frank McSherry, Ilya Mironov, and Moni
Naor. Our data, ourselves: Privacy via distributed noise generation. In Advances in
Cryptology-EUROCRYPT 2006, pages 486?503. Springer, 2006.
[dlPKLL04] Victor H. de la Pea, Michael J. Klass, and Tze Leung Lai. Self-normalized processes:
exponential inequalities, moment bounds and iterated logarithm laws. Ann. Probab.,
32(3):1902?1933, 07 2004.
[DMNS06] Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. Calibrating noise
to sensitivity in private data analysis. In TCC ?06, pages 265?284, 2006.
[DRV10] Cynthia Dwork, Guy N. Rothblum, and Salil P. Vadhan. Boosting and differential
privacy. In 51th Annual IEEE Symposium on Foundations of Computer Science, FOCS
2010, October 23-26, 2010, Las Vegas, Nevada, USA, pages 51?60, 2010.
[ES15] Hamid Ebadi and David Sands. Featherweight PINQ. CoRR, abs/1505.02642, 2015.
[KOV15] Peter Kairouz, Sewoong Oh, and Pramod Viswanath. The composition theorem for
differential privacy. In Proceedings of the 32nd International Conference on Machine
Learning, ICML 2015, Lille, France, 6-11 July 2015, pages 1376?1385, 2015.
[KS14] S.P. Kasiviswanathan and A. Smith. On the ?Semantics? of Differential Privacy: A
Bayesian Formulation. Journal of Privacy and Confidentiality, Vol. 6: Iss. 1, Article
1, 2014.
[LT91] M. Ledoux and M. Talagrand. Probability in Banach Spaces: Isoperimetry and Processes. A Series of Modern Surveys in Mathematics Series. Springer, 1991.
[MV16] Jack Murtagh and Salil P. Vadhan. The complexity of computing the optimal composition of differential privacy. In Theory of Cryptography - 13th International Conference, TCC 2016-A, Tel Aviv, Israel, January 10-13, 2016, Proceedings, Part I, pages
157?175, 2016.
[vdG02] Sara A van de Geer. On Hoeffding?s inequality for dependent random variables.
Springer, 2002.
9
| 6170 |@word private:18 version:5 achievable:1 nd:1 crucially:2 q1:1 incurs:1 asks:1 moment:1 series:2 selecting:1 daniel:2 cort:2 existing:3 must:5 written:1 john:1 additive:1 designed:1 aside:1 selected:2 smith:3 indefinitely:1 caveat:1 characterization:1 kairouz:2 boosting:1 kasiviswanathan:1 simpler:1 along:1 differential:28 symposium:3 focs:1 prove:7 naor:1 compose:1 introduce:1 privacy:143 x0:8 upenn:2 ra:8 indeed:3 themselves:4 frequently:1 discretized:1 rem:1 increasing:1 spain:1 provided:1 moreover:2 notation:2 bounded:2 what:2 israel:1 kind:1 degrading:1 kenthapadi:1 guarantee:8 every:10 pramod:1 abk:2 sensibly:1 uk:2 indistinguishability:3 control:1 grant:4 appear:2 before:4 positive:1 engineering:1 understood:1 ak:2 analyzing:1 rothblum:1 odometer:29 might:3 chose:1 quantified:1 challenging:1 sara:1 range:1 confidentiality:1 union:1 block:1 procedure:1 convenient:1 pre:2 dfh:2 seeing:1 get:1 cannot:4 close:1 selection:1 equivalent:4 map:1 measurable:2 roth:2 center:2 go:2 primitive:1 straightforward:1 survey:1 unstructured:2 shorten:1 pure:1 mironov:1 rule:3 deriving:1 oh:2 his:3 stability:2 proving:1 notion:2 laplace:1 controlling:1 suppose:1 imagine:1 us:2 designing:1 harvard:2 element:2 viswanath:2 cut:1 database:3 winograd:2 tung:2 worst:3 ensures:1 benjamin:1 complexity:1 salil:4 signature:1 depend:3 grateful:1 algebra:1 aaroth:1 easily:1 stylized:1 k0:6 distinct:1 describe:1 query:8 tell:1 outcome:9 heuristic:5 larger:1 valued:2 supplementary:3 say:3 otherwise:4 ability:1 raef:1 itself:2 sequence:8 rr:14 ledoux:1 nevada:1 propose:1 tcc:2 interaction:5 remainder:1 neighboring:9 combining:1 omer:1 achieve:1 asserts:2 differentially:15 getting:2 sea:1 adam:2 object:5 derive:1 depending:1 cryptology:1 measured:1 school:1 sa:1 strong:2 implies:1 differ:1 filter:27 pea:1 human:2 rogers:1 everything:1 implementing:1 sand:1 f1:1 generalization:3 suffices:1 preliminary:3 hamid:1 ryan:1 hold:5 sufficiently:1 considered:1 presumably:1 algorithmic:1 claim:1 m0:1 early:2 lose:1 hope:1 always:1 varying:1 blush:1 focus:2 she:3 improvement:1 sense:2 helpful:1 dependent:1 stopping:5 chiao:1 leung:1 typically:1 subroutine:5 selects:1 france:1 semantics:1 denoted:1 constrained:1 special:2 equal:2 never:2 having:1 represents:1 adversarially:1 lille:1 icml:1 nearly:3 anticipated:1 future:4 t2:1 inherent:1 krishnaram:1 modern:1 preserve:1 national:1 individual:1 replaced:3 ourselves:1 cns:2 delicate:2 ab:1 dwork:4 devoted:1 mcsherry:2 xb:1 tuple:1 necessary:1 loosely:1 divide:1 walk:1 logarithm:1 catching:1 mk:3 modeling:3 earlier:1 cover:1 entry:1 predicate:1 seventh:1 front:3 too:2 straightforwardly:1 answer:4 perturbed:1 adaptively:16 international:2 randomized:12 sensitivity:1 off:1 michael:1 ilya:1 again:1 unavoidable:1 shing:1 possibly:1 choose:1 hoeffding:1 guy:1 worse:6 yau:1 ek:3 kobbi:2 return:3 ullman:2 halting:2 de:2 includes:2 satisfy:1 sloan:2 depends:1 vi:2 multiplicative:1 break:1 lot:1 view:6 analyze:2 start:1 sort:2 option:1 complicated:1 simon:1 ante:1 formed:1 who:2 yield:1 bayesian:1 iterated:1 comp:20 cc:1 worth:1 r2k:2 sharing:1 neu:1 definition:10 failure:2 lossi:1 proof:3 mi:6 di:3 stop:2 dataset:5 hardt:1 recall:1 sophisticated:2 actually:1 exceeded:3 specify:1 response:6 improved:1 formulation:1 done:2 though:1 generality:2 just:1 stage:1 talagrand:1 hand:1 receives:3 ei:2 aviv:1 building:1 usage:1 validity:2 normalized:1 calibrating:1 usa:1 hence:2 moritz:1 round:33 indistinguishable:6 during:1 game:3 self:1 generalized:1 complete:1 reasoning:1 postprocessing:1 
wise:5 jack:2 vega:1 recently:1 fi:3 qp:1 banach:1 extend:1 he:2 interpretation:1 slight:1 composition:66 feldman:1 ai:1 mathematics:3 access:1 f0:1 similarity:2 longer:2 v0:1 pitassi:1 add:2 moni:1 eurocrypt:1 inequality:3 arbitrarily:1 continue:1 victor:1 preserving:1 surely:1 forty:1 tempting:1 july:1 multiple:1 full:3 needing:1 exceeds:2 technical:1 match:1 long:2 lai:1 post:6 e1:3 award:2 halt:8 a1:7 qi:5 variant:1 basic:9 essentially:2 iteration:2 sometimes:1 achieved:1 preserved:2 rea:1 want:3 appropriately:1 releasing:1 unlike:1 abi:4 file:3 thing:1 vitaly:1 reingold:1 call:2 vadhan:3 ee:1 noting:1 intermediate:2 identically:1 pennsylvania:2 andreas:1 idea:2 translates:1 utility:1 improperly:1 peter:1 proceed:1 generally:3 useful:4 covered:2 clear:2 informally:2 amount:1 continuation:1 exist:2 nsf:3 canonical:1 uk2:1 coordinating:1 track:1 write:2 vol:1 enormous:1 achieving:1 changing:1 budgeted:1 asymptotically:1 sum:3 run:9 parameterized:2 you:1 almost:1 reader:1 reasonable:1 separation:3 ab1:4 dkm:3 acceptable:2 comparable:1 capturing:1 bound:34 ki:1 pay:1 def:1 guaranteed:1 quadratic:1 nonnegative:1 annual:3 ahead:2 argument:1 bns:2 leon:1 department:3 dmns06:8 across:1 slightly:1 son:1 smaller:2 happens:1 restricted:1 ln:1 previously:3 initiate:1 know:2 operation:1 apply:2 coin:5 thomas:1 running:3 paulson:1 log2:2 society:1 realized:7 occurs:1 concentration:3 visiting:1 said:1 dp:12 distance:1 thank:2 gracefully:1 degrade:2 nissim:2 trivial:1 reason:2 taiwan:1 analyst:12 laying:1 length:1 cont:7 index:1 october:1 statement:1 stoc:1 frank:2 stated:1 design:3 upper:4 datasets:4 finite:1 anti:1 incorrectly:1 january:1 defining:1 extended:1 interacting:1 arbitrary:2 overlooked:1 introduced:1 david:1 pair:2 specified:1 barcelona:1 heterogenous:1 nip:1 adversary:14 regime:2 program:1 including:1 analogue:1 power:1 event:1 natural:1 rely:1 u02:1 isoperimetry:1 advanced:10 disappears:1 isn:1 prior:1 probab:1 acknowledgement:1 determining:1 asymptotic:5 law:1 toniann:1 loss:33 encompassing:1 adaptivity:1 generation:1 composable:1 triple:1 foundation:3 article:1 sewoong:1 collaboration:1 supported:2 last:1 formal:3 allow:3 steinke:1 stemmer:1 taking:3 distributed:2 van:1 valid:17 author:1 adaptive:26 global:5 conceptual:2 summing:1 conclude:2 xi:12 additionally:1 career:1 tel:1 example1:2 necessarily:1 domain:1 pk:18 main:2 whole:1 bounding:1 noise:2 n2:4 cryptography:1 pivotal:1 x1:1 i:1 bassily:1 martingale:5 exponential:1 composability:1 theorem:50 northeastern:1 example2:2 bad:1 cynthia:4 essential:2 exists:3 albeit:1 effectively:2 corr:1 ci:4 pierce:1 budget:5 theo1:1 uri:1 gap:1 lap:4 simply:3 tze:1 ux:1 talking:1 klass:1 springer:3 satisfies:6 acm:3 murtagh:2 viewed:1 ann:1 toss:4 feasible:2 change:1 infinite:1 except:1 uniformly:2 lemma:1 called:2 geer:1 la:2 aaron:2 select:5 college:1 formally:2 support:1 jonathan:2 investigator:1 ex:3 |
5,715 | 6,171 | The Limits of Learning with Missing Data
Brian Bullins
Elad Hazan
Princeton University
Princeton, NJ
{bbullins,ehazan}@cs.princeton.edu
Tomer Koren
Google Brain
Mountain View, CA
[email protected]
Abstract
We study linear regression and classification in a setting where the learning algorithm is allowed to access only a limited number of attributes per example, known
as the limited attribute observation model. In this well-studied model, we provide
the first lower bounds giving a limit on the precision attainable by any algorithm for
several variants of regression, notably linear regression with the absolute loss and
the squared loss, as well as for classification with the hinge loss. We complement
these lower bounds with a general purpose algorithm that gives an upper bound on
the achievable precision limit in the setting of learning with missing data.
1
Introduction
The primary objective of linear regression is to determine the relationships between multiple variables
and how they may affect a certain outcome. A standard example is that of medical diagnosis, whereby
the data gathered for a given patient provides information about their susceptibility to certain illnesses.
A major drawback to this process is the work necessary to collect the data, as it requires running
numerous tests for each person, some of which may be discomforting. In such cases it may be
necessary to impose limitations on the amount of data available for each example. For medical
diagnosis, this might mean having each patient only undergo a small subset of tests.
A formal setting for capturing regression and learning with limits on the number of attribute observations is known as the Limited Attribute Observation (LAO) setting, first introduced by Ben-David
and Dichterman [1]. For example, in a regression problem, the learner has access to a distribution
D over data (x, y) 2 Rd ? R, and fits the best (generalized) linear model according to a certain loss
function, i.e., it approximately solves the optimization problem
min
w: kw k p ?B
L D (w),
f
L D (w) = E(x, y)?D `(w> x
g
y) .
In the LAO setting, the learner does not have complete access to the examples x, which the reader
may think of as attributes of a certain patient. Rather, the learner can observe at most a fixed number
of these attributes, denoted k ? d. If k = d, this is the standard regression problem which can be
solved to arbitrary precision.
The main question we address: is it possible to compute an arbitrarily accurate solution if the number
of observations per example, k, is strictly less than d? More formally, given any " > 0, can one
compute a vector w for which
L D (w) ?
min
kw? k p ?B
L D (w? ) + ".
Efficient algorithms for regression with squared loss when k < d have been shown in previous work
[2], and the sample complexity bounds have since been tightened [6, 8]. However, similar results for
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
other common loss functions such as e.g. absolute loss have only been shown by relaxing the hard
limit of k attributes per example [3, 6].
In this paper we show, for the first time, that in fact this problem cannot be solved in general. Our
main result shows that even for regression with the absolute loss function, for any k ? d 1, there
is an information-theoretic lower bound on the error attainable by any algorithm. That is, there is
some " 0 > 0 for which an " 0 -optimal solution cannot be determined, irrespective of the number
of examples the learner sees. Formally, with constant probability, any algorithm returning a vector
w 2 Rd must satisfy
L D (w) > min
L D (w? ) + " 0 .
?
kw k p ?B
We further show that this ultimate achievable precision parameter is bounded from below by a
polynomial in the dimension, i.e., " 0 = ?(d 3/2 ).
Additionally, for the basic setting of Ridge regression (with the squared loss), we give a tight lower
bound for the LAO setting. Cesa-Bianchi et al. [2] provided the first efficient algorithm for this
setting with sample complexity of O(d 2 /k" 2 ) for " error. Hazan and Koren [6] improved upon this
result and gave a tight sample complexity of O(d/k" 2 ) to achieve " error. In both cases, however, the
algorithms only work when k 2. We complete the picture and show that k 2 attributes are in
fact necessary to obtain arbitrarily low error. That is, with only one attribute per example, there is an
information-theoretic limit on the accuracy attainable by any regression algorithm. We remark that a
similar impossibility result was proven by Cesa-Bianchi et al. [3] in the related setting of learning
with noisy examples.
Classification may be similarly cast in the LAO setting. For classification with the hinge loss, namely
soft-margin SVM, we give a related lower bound, showing that it is impossible to achieve arbitrarily
low error if the number of observed attributes is bounded by k ? d 1. However, unlike our lower
bound for regression, the lower bound we prove for classification scales exponentially with the
dimension. Although Hazan et al. [7] showed how classification may be done with missing data, their
work includes low rank assumptions and so it is not in contradiction with the lower bounds presented
here.
Similar to the LAO setting, the setting of learning with missing data [9, 4, 10, 11] presents the learner
with examples where the attributes are randomly observed. Since the missing data setting is at least
as difficult as the LAO setting, our lower bounds extend to this case as well.
We complement these lower bounds with a general purpose algorithm for regression and classification
p
with missing data that, given a sufficient number of samples, can achieve an error of O(1/ d). This
result leaves only a small polynomial gap compared to the information-theoretic lower bound that we
prove.
2
Setup and Statement of Results
The general framework of linear regression involves a set of instances, each of the form (x, y) where
x 2 Rd is the attribute vector and y 2 R is the corresponding target value. Under the typical statistical
learning framework [5], each (x, y) pair is drawn from a joint distribution D over Rd ? R. The
learner?s objective is to determine some linear predictor w such that w> x does well in predicting y.
The quality of prediction is measured according to a loss function ` : R 7! R. Two commonly used
loss functions for regression are the squared loss `(w> x y) = 12 (w> x y) 2 and the absolute loss
`(w> x y) = |w> x y|. Since our examples are drawn from some arbitrary distribution D, it is best
to consider the expected loss
?
?
L D (w) = E(x, y)?D `(w> x y) .
The learner?s goal then is to determine a regressor w that minimizes the expected loss L D (w).
To avoid overfitting, a regularization term is typically added, which up to some constant factor is
equivalent to
min L D (w) s.t. kwk p ? B
w2R d
for some regularization parameter B > 0, where k ? k p is the standard ` p norm, p 1. Two common
variants of regression are Ridge regression (p = 2 with squared loss) and Lasso regression (p = 1
with squared loss).
2
The framework for classification is nearly identical to that of linear regression. The main distinction
comes from a different meaning of y 2 R, namely that y acts as a label for the corresponding example.
The loss function also changes when learning a classifier, and in this paper we are interested in the
hinge loss `(y ? w> x) = max{0, 1 y ? w> x}. The overall goal of the learner, however, remains the
same: namely, to determine a classifier w such that L D (w) is minimized. Throughout the paper, we
let w? denote the minimizer of L D (w).
2.1
Main Results
As a first step, for Lasso and Ridge regressions, we show that one always needs to observe at least two
attributes to be able to learn a regressor to arbitrary precision. This is given formally in Theorem 1.
1
Theorem 1. Let 0 < " < 32
and let ` be the squared loss. Then there exists a distribution D over
{x : ||x||1 ? 1} ? [ 1, 1] such that kw? k1 ? 2, and any regression algorithm that can observe at
? such that
most one attribute of each training example of a training set S cannot output a regressor w
? < L D (w? ) + ".
ES [L D (w)]
1
Corollary 2. Let 0 < " < 64
and let ` be the squared loss. Then there exists a distribution D over
{x : ||x||2 ? 1} ? [ 1, 1] such that kw? k2 ? 2, and any regression algorithm that can observe at
? such that
most one attribute of each training example of a training set S cannot output a regressor w
? < L D (w? ) + ".
ES [L D (w)]
The lower bounds are tight?recall that with two attributes, it is indeed possible to learn a regressor to
within arbitrary precision [2, 6]. Also, notice the order of quantification in the theorems: it turns out
that there exists a distribution which is hard for all algorithms (rather than a different hard distribution
for any algorithm).
For regression with absolute loss, we consider the setting where the learner is limited to seeing k or
fewer attributes of each training sample. Theorem 3 below shows that in the case where k < d the
learner cannot hope to learn an "-optimal regressor for some " > 0.
3
1
Theorem 3. Let d 4, d ? 0 (mod 2), 0 < " < 60
d 2 , and let ` be the absolute loss. Then there
exists a distribution D over {x : ||x||1 ? 1} ? [ 1, 1] such that kw? k1 ? 2, and any regression
algorithm that can observe at most d 1 attributes of each training example of a training set S cannot
? such that ES [L D (w)]
? < L D (w? ) + ".
output a regressor w
1
Corollary 4. Let 0 < " < 60
d 2 , and let ` be the absolute loss. Then there exists a distribution D
over {x : ||x||2 ? 1} ? [ 1, 1] such that kw? k2 ? 1, and any regression algorithm that can observe at
? such
most d 1 attributes of each training example of a training set S cannot output a regressor w
? < L D (w? ) + ".
that ES [L D (w)]
We complement our findings for regression with the following analogous lower bound for classification with the hinge loss (a.k.a., soft margin SVM).
Theorem 5. Let d 4, d ? 0 (mod 2), and let ` be the hinge loss. Then there exists an " 0 > 0
such that the following holds: there exists a distribution D over {x : ||x||2 ? 1} ? [ 1, 1] such that
kw? k2 ? 1, and any classification algorithm that can observe at most d 1 attributes of each training
? such that ES [L D (w)]
? < L D (w? ) + " 0 .
example of a training set S cannot output a regressor w
3
Lower Bounds
In this section we discuss our lower bounds for regression with missing attributes. As a warm-up,
we first prove Theorem 1 for regression with the squared loss. While the proof is very simple,
it illustrates some of the main ideas used in all of our lower bounds. Then, we give a proof of
Theorem 3 for regression with the absolute loss. The proofs of the remaining bounds are deferred to
the supplementary material.
3.1
Lower bounds for the squared loss
Proof of Theorem 1. It is enough to prove the theorem for deterministic learning algorithms, namely,
for algorithms that do not use any external randomization (i.e., any randomization besides the random
samples drawn from the data distribution itself). This is because any randomized algorithm can
3
be thought of as a distribution over deterministic algorithms, which is independent of the data
distribution.
1
Now, suppose 0 < " < 32
. Let X1 = {(0, 0), (1, 1)}, X2 = {(0, 1), (1, 0)}, and let D1 and D2
be uniform distributions over X1 ? {1} and X2 ? {1}, respectively. The main observation is that
any learner that can observe at most one attribute of each example cannot distinguish between the
two distributions with probability greater than 12 , no matter how many samples it is given. This is
because the marginal distributions of the individual attributes under both D1 and D2 are exactly the
same. Thus, to prove the theorem it is enough to show that the sets of "-optimal solutions under the
distributions D1 and D2 are disjoint. Indeed, suppose that there is a learning algorithm that emits a
? such that E[L D (w)
?
vector w
L D (w? )] < "/2 (where the expectation is over the random samples
? < L D (w? ) + " with
from D used by the algorithm). By Markov?s inequality, it holds that L D (w)
probability > 1/2. Hence, the output of the algorithm allows one to distinguish between the two
distributions with probability > 1/2, contradicting the indistinguishability property.
We set to characterize the sets of "-optimal solutions under D1 and D2 . For D1 , we have
1 X 1 >
1 1
L D1 (w) =
(w x 1) 2 = + (w1 + w2 1) 2,
2 x2X 2
4 4
1
while for D2 ,
L D2 (w) =
1 X 1 >
(w x
2 x2X 2
1) 2 =
2
1
(w1
4
1) 2 +
1
(w2
4
1) 2 .
p
Note that the set of "-optimal regressors for L D1 is S1 = {w : |w> 1 p1| ? 2 "}, whereas for L D2
p
the set is S2 = {w : kw 1k2 ? 2 "}. Let S20 = {w : |w> 1 2| ? 2 2"}. Then S2 ? S20 , so it is
0
sufficient to show that S1 and S2 are disjoint.
1
Since " < 32
, for any w 2 S1 , |w> 1 1| < 12 , meaning w> 1 < 32 . However, for any w 2 S20 ,
1
>
|w 1 2| < 2 meaning w> 1 > 32 , and so w cannot be a member of both S1 and S2 . As we argued
earlier, this suffices to prove the theorem.
?
3.2
Lower bounds for the absolute loss
As in the proof of Theorem 1, the main idea is to show that one can design two distributions that are
indistinguishable to a learner who can observe no more than d 1 attributes of any sample given by
the distribution (i.e., that their marginals over any choice of d 1 attributes are identical), but whose
respective sets of "-optimal regressors are disjoint. However, in contrast to Theorem 1, both handling
general d along with switching to the absolute loss introduce additional complexities to the proof that
require different techniques.
We start by constructing these two distributions D1 and D2 . Let X1 = {x = (x 1, . . . , x d ) : x 2
{0, 1} d , kxk1 ? 0 (mod 2)} and X2 = {x = (x 1, . . . , x d ) : x 2 {0, 1} d , kxk1 ? 1 (mod 2)}, and let D1
and D2 be uniform over X1 ? {1} and X2 ? {1}, respectively. From this construction, it is not hard to
see that for any choice of k ? d 1 attributes, the marginals over the k attributes of both distributions
are identical: they are both a uniform distribution over k bits. Thus, the distributions D1 and D2 are
indistinguishable to a learner that can only observe at most d 1 attributes of each example.
Let `(w> x
y) = |w> x
y|, and let
1
L D1 (w) = E(x, y)?D1 [`(w> x, y)] =
2d
L D2 (w) = E(x, y)?D2 [`(w> x, y)] =
2d 1
and
1
1
X
|w> x
1|,
X
|w> x
1|.
x2X1
x2X2
It turns out that the subgradients of L D1 (w) and L D2 (w), which we denote by @L D1 (w) and @L D2 (w)
respectively, can be expressed precisely. In fact, the full subgradient set at every point in the domain
for both functions can be made explicit. With these representations in hand, we can show that
2
w?1 = d2 1d and w?2 = d+2
1d are minimizers of L D1 (w) and L D2 (w), respectively.
4
Figure 1: Geometric intuition for Lemmas 6 and 7. The lower bounding absolute value function acts
as a relaxation of the true expected loss L D (depicted here as a cone).
In fact, using the subgradient sets we can prove a much stronger property of the expected losses
L D1 and L D2 , akin to a ?directional strong convexity? property around their respective minimizers.
The geometric idea behind this property is shown in Figure 1, whereby L D is lower bounded by an
absolute value function.
Lemma 6. Let w?1 = d2 1d . For any w 2 Rd we have
p
2?
L D1 (w) L D1 (w?1 )
p ? 1>d (w w?1 ) .
e4 d
2
Lemma 7. Let w?2 = d+2
1d . For any w 2 Rd we have
p
2?
?
L D2 (w) L D2 (w2 )
p ? 1>d (w w?2 ) .
e4 d
Given Lemmas 6 and 7, the proof of Theorem 3 is immediate.
Proof of Theorem 3. As a direct consequence of Lemmas 6 and 7, we obtain that the sets
p
8
9
>
>
2?
>
?
=
S1 = <
> w : e4 pd ? 1d (w w1 ) ? " >
:
;
and
p
8
9
>
>
2?
>
?
=
S2 = <
w
:
?
1
(w
w
)
?
"
p
2
d
>
>
4
e d
:
;
contain the sets of "-optimal regressors for L D1 (w) and L D2 (w), respectively. All that is needed now
3
1
is to show a separation of their "-optimal sets for 0 < " < 60
d 2 , and this is done by showing a
3
1
separation of the more manageable sets S1 and S2 . Indeed, fix 0 < " < 60
d 2 and observe that for
any w 2 S1 we have
p
2?
p
e4 d
? 1>d (w
1>d w
w?1 ) ?
2
1
60 d
3
2
1
>2
2d
On the other hand, for any w 2 S2 we have
p
2?
p
e4 d
and so, for d
4,
1
2d + 3
=
.
d+2
d+2
? 1>d (w
w?2 ) ?
1
60 d
3
2
, thus
2d
1
2d
1
2d + 1
+
<
+
=
.
d + 2 2d
d+2 d+2
d+2
We see that no w can exist in both S1 and S2 , so these sets are disjoint. Theorem 3 follows by the
same reasoning used to conclude the proof of Theorem 1.
?
1>d w ?
5
It remains to prove Lemmas 6 and 7. As the proofs are very similar, we will only prove Lemma 6
here and defer the proof of Lemma 7 to the supplementary material.
Proof of Lemma 6. We first write
1
@L D1 (w) =
Letting w?1 =
2d 1
2
d
X
@`(w> x, 1) =
x2X1
1
2d 1
X
sign(w> x
x2X1
1) ? x.
? 1d , we have that
1 X
@L D1 (w?1 ) = d 1
sign(w?>
1 x 1) ? x
2
x2X1
1 ? X
= d 1
sign(w?>
1 x 1) ? x
2
x2X ,
1
kx k1 = d2
+
X
x2X1,
kx k1 > d2
=
1 ? X
2d
1
sign(w?>
1 x
x2X1,
kx k1 = d2
1) ? x +
sign(0) ? x +
X
X
x2X1,
kx k1 < d2
x
x2X1,
kx k1 > d2
sign(w?>
1 x
X
x2X1,
kx k1 < d2
1) ? x
?
?
x ,
where sign(0) can be any number in [ 1, 1]. Next, we compute
X
X
x
x2X1,
kx k1 > d2
d
x=
x2X1,
kx k1 < d2
2
X
i= d4 +1
d
2i
d
=
2 2
X
i=0
d
=
d
2
( 1)
i
!
1
? 1d
1
d
1
i
!
2
? 1d ,
2
!
d
4 1
X
d
2i
i=1
!
1
? 1d
1
? 1d
? ?
? ?
P
where the last equality follows from the elementary identity ki=0 ( 1) i ni = ( 1) k nk 1 , which we
prove in Lemma 9 in the supplementary material. Now, let X ? = {x 2 X1 : kxk1 = d2 }, let m = |X ? |,
and let X = [x1, . . . , xm ] 2 Rd?m be the matrix formed by all x 2 X ? . Then we may express the
entire subgradient set explicitly as
!
? 1 ?
?
d 2
@L D1 (w?1 ) = d 1 Xr + d
? 1d
r 2 [ 1, 1]m .
2
2
2
Thus, any choice of r 2 [ 1, 1]m will result in a specific subgradient of L D1 (w?1 ). Consider two such
?d 1?
choices: r1 = 0 and r2 = 1d . Note that Xr1 = 0 and Xr2 =
? 1d ; to see the last equality,
d
2 1
consider any fixed coordinate i and notice that the number of elements in X ? with non-zero values in
the i?th coordinate is equal to the number of ways to choose the remaining d2 1 non-zero coordinates
from the other d 1 coordinates. We then observe that the corresponding subgradients are
!
!
!
1
d 2
1 d 2
h+ = d 1 Xr1 + d
? 1d = d 1 d
? 1d ,
2
2
2
2
2
2
and
!
!
!
2
1 d 2
?
1
=
? 1d .
d
d
2
2d 1
2d 1 d2 1
2
Note that, since the set of subgradients of L D1 (w?1 ) is a convex set, by taking a convex combination
of h+ and h it follows that 0 2 @L D1 (w?1 ) and so we see that w?1 is a minimizer of L D1 (w).
h =
1
Xr2 +
d
6
Given a handle on the subgradient
that these coefficients are polynomial in d.
p set, we now show
p
Observe that, using the fact that 2?n( ne ) n ? n! ? e n( ne ) n, we have
?
?d 2
p
!
+/
2?(d 2) de 2
1 d 2
1 *.
.. q
/
q ?
d
d
d /
?
?
?
d
1
d
1
2 d 2
2
2
2
2
e2 d 2 4 d2 d2e4 2
2e
,
p
!d 2
1 *
2?
d 2
. p ?
? +/
d
2d 1 e2 d d1 1
2
,
p
!d 2
2? +
2
* p
1
2
d 2
,e dp
2?
p .
4
e d
p
p ? 1d . Since h? can be written as a convex combination of h+ and 0, we see that
Let h? = 42?
e d
?
h 2 @L D1 (w?1 ). Similarly we may see that
?
?
p
p
p
!
d 2 d 2
+/
1 d 2
1 *. 2?(d 2) e
2?
2?
?
?
p
p .
?
?
.
/=
2
4
2d 1 d2 1
2d 1 e2 ( d 1) d 2 d 2
e d 2
e d
2
2e
,
Again, since h? can be written as a convex combination of the vectors h and 0 in the subgradient
set, we may conclude that h? 2 @L D1 (w?1 ) as well.
By the subgradient inequality it follows that, for all w 2 Rd ,
p
L D1 (w)
L D1 (w?1 )
h?> (w
w?1 ) =
h?> (w
w?1 ) =
and
L D1 (w)
L D1 (w?1 )
which taken together imply that
L D1 (w)
p
L D1 (w?1 )
e4
2?
p ? 1>d (w
d
as required.
4
2?
p ? 1>d (w w?1 )
d
p
2?
p ? 1>d (w w?1 ),
4
e d
e4
w?1 )
?
General Algorithm for Limited Precision
Although we have established limits on the attainable precision for some learning problems, there is
still the possibility of reaching this limit. In this section we provide a general algorithm,
whereby a
p
learner that can observe k < d attributes can always achieve an expected loss of O( 1 k/d).
We provide the pseudo-code in Algorithm 1. Although similar to the AERR algorithm of Hazan and
Koren [6]?which is designed to work only with the squared loss?Algorithm 1 avoids the necessity
of an unbiased gradient estimator by replacing the original loss function with a slightly biased one.
As long as the new loss function is chosen carefully (and the functions are Lipschitz bounded), and
given enough samples, the algorithm can return a regressor of limited precision. This is in contrast to
AERR whereby an arbitrarily precise regressor can always be achieved with enough samples.
Formally, for Algorithm 1 we prove the following (proof in the supplementary material).
Theorem 8. Let ` : R 7! R be an H-Lipschitz function defined over [ 2B, 2B]. Assume the
?
distribution D is such that kxk2 ? 1 and |y| ? B with probability 1. Let B? = max{B, 1}, and let w
? 2 Rd with
p . Then, k wk
?
be the output of Algorithm 1, when run with ? = G2B
?
B,
and
for
any
w
2
m
kw? k2 ? B,
r
2H B
k
?
2
? ? L D (w ) + p + 2H B? 1
E[L D (w)]
.
d
m
7
Algorithm 1 General algorithm for regression/classification with missing attributes
Input: Loss function `, training set S = {(xt , yt )}t 2[m], k, B, ? > 0
? with k wk
? 2?B
Output: Regressor w
1: Initialize w1 , 0, kw1 k2 ? B arbitrarily
2: for t = 1 to m do
3:
Uniformly choose subset of k indices {i t,r }r 2[k] from [d] without replacement
P
4:
Set x? t = rk =1 x[i t,r ] ? ei t, r
5:
Regression case:
? t yt )
6:
Choose ?t 2 @`(w>
t x
7:
Classification case:
?t )
8:
Choose ?t 2 @`(yt ? w>
t x
9:
Update
B
wt+1 =
? (wt ?( ?t ? x? t ))
max{kwt ?( ?t ? x? t )k2, B}
10: end for
Pm
? = m1 t=1
11: Return w
wt
In particular, for m = d/(d
k) we have
r
2
?
? ? L D (w ) + 4H B 1
E[L D (w)]
?
and so when the learner observes k = d
optimum.
5
k
,
d
p
1 attributes, the expected loss is O(1/ d)-away from
Conclusions and Future Work
In the limited attribute observation setting, we have shown information-theoretic lower bounds for
some variants of regression, proving that a distribution-independent algorithm for regression with
absolute loss that attains " error cannot exist and closing the gap for ridge regression as suggested
by Hazan and Koren [6]. We have also shown that the proof technique applied for regression
with absolute loss can be extended to show a similar bound for classification with the hinge loss.
In addition, we have described a general purpose algorithm which complements these results by
providing a means of achieving error up to a certain precision limit.
An interesting possibility for future work would be to try to bridge the gap between the upper and
lower bounds of the precision limits, particularly in the case of the exponential gap for classification
with hinge loss. Another direction would be to develop a more comprehensive understanding of these
lower bounds in terms of more general functions, one example being classification with logistic loss.
References
[1] S. Ben-David and E. Dichterman. Learning with restricted focus of attention. Journal of
Computer and System Sciences, 56(3):277?298, 1998.
[2] N. Cesa-Bianchi, S. Shalev-Shwartz, and O. Shamir. Efficient learning with partially observed
attributes. In Proceedings of the 27th International Conference on Machine Learning, 2010.
[3] N. Cesa-Bianchi, S. Shalev-Shwartz, and O. Shamir. Online learning of noisy data. IEEE
Transactions on Information Theory, 57(12):7907?7931, 2011.
[4] O. Dekel, O. Shamir, and L. Xiao. Learning to classify with missing and corrupted features.
Machine Learning Journal, 81(2):149?178, 2010.
[5] D. Haussler. Decision theoretic generalizations of the PAC model for neural net and other
learning applications. Information and Computation, 100(1):78?150, 1992.
[6] E. Hazan and T. Koren. Linear regression with limited observation. In Proceedings of the 29th
International Conference on Machine Learning (ICML?12), Edinburgh, Scotland, UK, 2012.
8
[7] E. Hazan, R. Livni, and Y. Mansour. Classification with low rank and missing data. In
Proceedings of the 32nd International Conference on Machine Learning, 2015.
[8] D. Kukliansky and O. Shamir. Attribute efficient linear regression with data-dependent sampling.
In Proceedings of the 32nd International Conference on Machine Learning, 2015.
[9] R. J. A. Little and D. B. Rubin. Statistical Analysis with Missing Data, 2nd Edition. WileyInterscience, 2002.
[10] P.-L. Loh and M. J. Wainwright. High-dimensional regression with noisy and missing data:
Provable guarantees with non-convexity. In Advances in Neural Information Processing Systems,
2011.
[11] A. Rostamizadeh, A. Agarwal, and P. Bartlett. Learning with missing features. In The 27th
Conference on Uncertainty in Artificial Intelligence, 2011.
[12] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical
Society, Series B, 58(1):267?288, 1996.
[13] M. Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In
Proceedings of the 20th International Conference on Machine Learning, 2003.
9
| 6171 |@word manageable:1 polynomial:3 achievable:2 norm:1 stronger:1 dekel:1 nd:3 d2:34 attainable:4 necessity:1 series:1 com:1 must:1 written:2 designed:1 update:1 intelligence:1 leaf:1 fewer:1 scotland:1 provides:1 along:1 direct:1 prove:11 introduce:1 notably:1 indeed:3 expected:6 p1:1 brain:1 little:1 spain:1 provided:1 bounded:4 mountain:1 minimizes:1 finding:1 nj:1 guarantee:1 pseudo:1 every:1 act:2 exactly:1 returning:1 classifier:2 k2:7 uk:1 indistinguishability:1 medical:2 limit:10 consequence:1 switching:1 approximately:1 might:1 studied:1 collect:1 relaxing:1 limited:8 xr:1 thought:1 seeing:1 cannot:11 selection:1 impossible:1 equivalent:1 deterministic:2 zinkevich:1 missing:13 yt:3 attention:1 convex:5 contradiction:1 estimator:1 haussler:1 proving:1 handle:1 coordinate:4 analogous:1 target:1 suppose:2 construction:1 shamir:4 programming:1 element:1 particularly:1 observed:3 kxk1:3 solved:2 observes:1 intuition:1 pd:1 convexity:2 complexity:4 tight:3 upon:1 learner:15 joint:1 artificial:1 outcome:1 shalev:2 whose:1 elad:1 supplementary:4 think:1 noisy:3 itself:1 online:2 net:1 achieve:4 optimum:1 r1:1 ben:2 develop:1 measured:1 strong:1 solves:1 c:1 involves:1 come:1 direction:1 drawback:1 attribute:34 material:4 argued:1 require:1 suffices:1 fix:1 generalization:1 randomization:2 brian:1 elementary:1 strictly:1 hold:2 around:1 major:1 susceptibility:1 purpose:3 label:1 bridge:1 hope:1 always:3 rather:2 reaching:1 avoid:1 shrinkage:1 corollary:2 focus:1 rank:2 impossibility:1 contrast:2 attains:1 rostamizadeh:1 dependent:1 minimizers:2 typically:1 entire:1 interested:1 overall:1 classification:16 denoted:1 initialize:1 marginal:1 equal:1 having:1 sampling:1 identical:3 kw:10 icml:1 nearly:1 future:2 minimized:1 randomly:1 kwt:1 individual:1 comprehensive:1 replacement:1 x2x2:1 possibility:2 x2x1:11 deferred:1 behind:1 accurate:1 ehazan:1 necessary:3 respective:2 instance:1 classify:1 soft:2 earlier:1 subset:2 predictor:1 uniform:3 characterize:1 corrupted:1 person:1 international:5 randomized:1 regressor:12 together:1 w1:4 squared:11 again:1 cesa:4 choose:4 external:1 return:2 de:1 wk:2 includes:1 coefficient:1 matter:1 satisfy:1 explicitly:1 view:1 try:1 hazan:7 kwk:1 start:1 defer:1 formed:1 ni:1 accuracy:1 who:1 gathered:1 directional:1 infinitesimal:1 e2:3 proof:14 aerr:2 emits:1 recall:1 carefully:1 improved:1 done:2 hand:2 replacing:1 ei:1 google:2 logistic:1 quality:1 contain:1 true:1 unbiased:1 regularization:2 hence:1 equality:2 indistinguishable:2 whereby:4 d4:1 generalized:2 complete:2 theoretic:5 ridge:4 reasoning:1 meaning:3 common:2 exponentially:1 extend:1 illness:1 m1:1 marginals:2 rd:9 pm:1 similarly:2 closing:1 kw1:1 access:3 showed:1 certain:5 inequality:2 arbitrarily:5 greater:1 additional:1 impose:1 determine:4 multiple:1 full:1 long:1 prediction:1 variant:3 regression:39 basic:1 patient:3 expectation:1 agarwal:1 achieved:1 whereas:1 addition:1 x2x:3 w2:3 biased:1 unlike:1 ascent:1 undergo:1 member:1 mod:4 enough:4 affect:1 fit:1 gave:1 lasso:3 idea:3 bartlett:1 ultimate:1 akin:1 loh:1 remark:1 xr2:2 amount:1 exist:2 notice:2 xr1:2 sign:7 disjoint:4 per:4 tibshirani:1 diagnosis:2 write:1 express:1 achieving:1 drawn:3 subgradient:7 relaxation:1 cone:1 run:1 uncertainty:1 throughout:1 reader:1 separation:2 decision:1 bit:1 capturing:1 bound:25 ki:1 koren:5 distinguish:2 precisely:1 x2:4 min:4 subgradients:3 according:2 combination:3 slightly:1 s1:8 bullins:1 restricted:1 taken:1 remains:2 turn:2 discus:1 needed:1 letting:1 end:1 available:1 observe:14 
away:1 dichterman:2 original:1 running:1 remaining:2 hinge:7 giving:1 k1:10 society:1 objective:2 question:1 added:1 primary:1 gradient:2 dp:1 provable:1 besides:1 code:1 index:1 relationship:1 providing:1 difficult:1 setup:1 statement:1 design:1 bianchi:4 upper:2 observation:7 markov:1 immediate:1 extended:1 precise:1 mansour:1 arbitrary:4 tomer:1 introduced:1 complement:4 david:2 cast:1 namely:4 pair:1 required:1 s20:3 distinction:1 established:1 barcelona:1 nip:1 address:1 able:1 suggested:1 below:2 xm:1 max:3 royal:1 wainwright:1 quantification:1 warm:1 predicting:1 lao:6 imply:1 numerous:1 picture:1 ne:2 irrespective:1 geometric:2 understanding:1 loss:45 interesting:1 limitation:1 proven:1 sufficient:2 xiao:1 rubin:1 tightened:1 last:2 formal:1 taking:1 absolute:14 livni:1 edinburgh:1 dimension:2 avoids:1 commonly:1 made:1 regressors:3 transaction:1 overfitting:1 conclude:2 shwartz:2 additionally:1 learn:3 ca:1 constructing:1 domain:1 main:7 s2:8 bounding:1 edition:1 contradicting:1 allowed:1 x1:6 precision:11 explicit:1 exponential:1 kxk2:1 theorem:19 e4:7 rk:1 specific:1 xt:1 showing:2 pac:1 r2:1 svm:2 tkoren:1 exists:7 illustrates:1 margin:2 kx:8 gap:4 nk:1 depicted:1 expressed:1 partially:1 minimizer:2 goal:2 identity:1 lipschitz:2 hard:4 change:1 determined:1 typical:1 uniformly:1 wt:3 lemma:10 w2r:1 e:5 formally:4 princeton:3 d1:35 wileyinterscience:1 handling:1 |
5,716 | 6,172 | Image Restoration Using Very Deep Convolutional
Encoder-Decoder Networks with Symmetric Skip
Connections
?
Xiao-Jiao Mao? , Chunhua Shen? , Yu-Bin Yang?
State Key Laboratory for Novel Software Technology, Nanjing University, China
?
School of Computer Science, University of Adelaide, Australia
Abstract
In this paper, we propose a very deep fully convolutional encoding-decoding framework for image restoration such as denoising and super-resolution. The network is
composed of multiple layers of convolution and deconvolution operators, learning
end-to-end mappings from corrupted images to the original ones. The convolutional layers act as the feature extractor, which capture the abstraction of image
contents while eliminating noises/corruptions. Deconvolutional layers are then
used to recover the image details. We propose to symmetrically link convolutional
and deconvolutional layers with skip-layer connections, with which the training
converges much faster and attains a higher-quality local optimum. First, the skip
connections allow the signal to be back-propagated to bottom layers directly, and
thus tackles the problem of gradient vanishing, making training deep networks
easier and achieving restoration performance gains consequently. Second, these
skip connections pass image details from convolutional layers to deconvolutional
layers, which is beneficial in recovering the original image. Significantly, with
the large capacity, we can handle different levels of noises using a single model.
Experimental results show that our network achieves better performance than recent
state-of-the-art methods.
1
Introduction
The task of image restoration is to recover a clean image from its corrupted observation, which
is known to be an ill-posed inverse problem. By accommodating different types of corruption
distributions, the same mathematical model applies to problems such as image denoising and superresolution. Recently, deep neural networks (DNNs) have shown their superior performance in image
processing and computer vision tasks, ranging from high-level recognition, semantic segmentation to
low-level denoising, super-resolution, deblur, inpainting and recovering raw images from compressed
ones. Despite the progress that DNNs achieve, some research questions remain to be answered. For
example, can a deeper network in general achieve better performance? Can we design a single deep
model which is capable to handle different levels of corruptions?
Observing recent superior performance of DNNs on image processing tasks, we propose a convolutional neural network (CNN)-based framework for image restoration. We observe that in order to
obtain good restoration performance, it is beneficial to train a very deep model. Meanwhile, we show
that it is possible to achieve very promising performance with a single network when processing
multiple different levels of corruptions due to the benefits of large-capacity networks. Specifically,
the proposed framework learns end-to-end fully convolutional mappings from corrupted images to the
clean ones. The network is composed of multiple layers of convolution and deconvolution operators.
As deeper networks tend to be more difficult to train, we propose to symmetrically link convolutional
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
and deconvolutional layers with skip-layer connections, with which the training procedure converges
much faster and is more likely to attain a high-quality local optimum.
Our main contributions are summarized as follows.
? A very deep network architecture, which consists of a chain of symmetric convolutional
and deconvolutional layers, for image restoration is proposed in this paper. The convolutional layers act as the feature extractor which encode the primary components of image
contents while eliminating the corruption. The deconvolutional layers then decode the image
abstraction to recover the image content details.
? We propose to add skip connections between corresponding convolutional and deconvolutional layers. These skip connections help to back-propagate the gradients to bottom layers
and pass image details to top layers, making training of the end-to-end mapping easier
and more effective, and thus achieving performance improvement while the network going
deeper. Relying on the large capacity and fitting ability of our very deep network, we also
propose to handle different level of noises/corruption using a single model.
? Experimental results demonstrate the advantages of our network over other recent stateof-the-art methods on image denoising and super-resolution, setting new records on these
topics.1
Related work Extensive work has been done on image restoration in the literature. See detail
reviews in a survey [21]. Traditional methods such as Total variation [24, 23], BM3D algorithm
[5] and dictionary learning based methods [31, 10, 2] have shown very good performance on image
restoration topics such as image denoising and super-resolution. Since image restoration is in general
an ill-posed problem, the use of regularization [34, 9] has been proved to be essential.
An active and probably more promising category for image restoration is the DNN based methods.
Stacked denoising auto-encoder [29] is one of the most well-known DNN models which can be used
for image restoration. Xie et al. [32] combined sparse coding and DNN pre-trained with denoising
auto-encoder for low-level vision tasks such as image denoising and inpainting. Other neural networks
based methods such as multi-layer perceptron [1] and CNN [15] for image denoising, as well as DNN
for image or video super-resolution [4, 30, 7, 14] and compression artifacts reduction [6] have been
actively studied in these years.
Burger et al. [1] presented a patch-based algorithm learned with a plain multi-layer perceptron.
They also concluded that with large networks, large training data, neural networks can achieve
state-of-the-art image denoising performance. Jain and Seung [15] proposed a fully convolutional
CNN for denoising. They found that CNNs provide comparable or even superior performance to
wavelet and Markov Random Field (MRF) methods. Cui et al. [4] employed non-local self-similarity
(NLSS) search on the input image in multi-scale, and then used collaborative local auto-encoder for
super-resolution in a layer by layer fashion. Dong et al. [7] proposed to directly learn an end-to-end
mapping between the low/high-resolution images. Wang et al. [30] argued that domain expertise
represented by the conventional sparse coding can be combined to achieve further improved results.
An advantage of DNN methods is that these methods are purely data driven and no assumptions about
the noise distributions are made.
2
Very deep RED-Net for Image Restoration
The proposed framework mainly contains a chain of convolutional layers and symmetric deconvolutional layers, as shown in Figure 1. We term our method ?RED-Net??very deep Residual
Encoder-Decoder Networks.
2.1
Architecture
The framework is fully convolutional and deconvolutional. Rectification layers are added after each
convolution and deconvolution. The convolutional layers act as feature extractor, which preserve
the primary components of objects in the image and meanwhile eliminating the corruptions. The
deconvolutional layers are then combined to recover the details of image contents. The output of
the deconvolutional layers is the ?clean? version of the input image. Moreover, skip connections
1
We have released the evaluation code at https://bitbucket.org/chhshen/image-denoising/
2
Figure 1: The overall architecture of our proposed network. The network contains layers of symmetric
convolution (encoder) and deconvolution (decoder). Skip-layer connections are connected every a few (in our
experiments, two) layers.
are also added from a convolutional layer to its corresponding mirrored deconvolutional layer. The
convolutional feature maps are passed to and summed with the deconvolutional feature maps elementwise, and passed to the next layer after rectification.
For low-level image restoration problems, we prefer using neither pooling nor unpooling in the
network as usually pooling discards useful image details that are essential for these tasks. Motivated
by the VGG model [27], the kernel size for convolution and deconvolution is set to 3?3, which has
shown excellent image recognition performance. It is worth mentioning that the size of input image
can be arbitrary since our network is essentially a pixel-wise prediction. The input and output of the
network are images of the same size w ? h ? c, where w, h and c are width, height and number of
channels. In this paper, we use c = 1 although it is straightforward to apply to images with c > 1. We
found that using 64 feature maps for convolutional and deconvolutional layers achieves satisfactory
results, although more feature maps leads to slightly better performance. Deriving from the above
architecture, in this work we mainly conduct experiments with two networks, which are 20-layer and
30-layer respectively.
2.1.1
Deconvolution decoder
Architectures combining layers of convolution and deconvolution [22, 12] have been proposed for
semantic segmentation lately. In contrast to convolutional layers, in which multiple input activations
within a filter window are fused to output a single activation, deconvolutional layers associate a single
input activation with multiple outputs. Deconvolution is usually used as learnable up-sampling layers.
One can simply replace deconvolution with convolution, which results in an architecture that is very
similar to recently proposed very deep fully convolutional neural networks [19, 7]. However, there
exist differences between a fully convolution model and our model.
First, in the fully convolution case, the noise is eliminated step by step, i.e., the noise level is reduced
after each layer. During this process, the details of the image content may be lost. Nevertheless, in our
network, convolution preserves the primary image content. Then deconvolution is used to compensate
the details. We compare the 5-layer and 10-layer fully convolutional network with our network
(combining convolution and deconvolution, but without skip connection). For fully convolutional
networks, we use padding and up-sample the input to make the input and output the same size. For
our network, the first 5 layers are convolutional and the second 5 layers are deconvolutional. All the
other parameters for training are the same. In terms of Peak Signal-to-Noise Ratio (PSNR), using
deconvolution works slightly better than the fully convolutional counterpart.
On the other hand, to apply deep learning models on devices with limited computing power such
as mobile phones, one has to speed-up the testing phase. In this situation, we propose to use downsampling in convolutional layers to reduce the size of the feature maps. In order to obtain an output
of the same size as the input, deconvolution is used to up-sample the feature maps in the symmetric
deconvolutional layers. We experimentally found that the testing efficiency can be well improved
with almost negligible performance degradation.
3
Figure 2: An example of a building block in the proposed framework. For ease of visualization, only two skip
connections are shown in this example, and the ones in layers represented by fk are omitted.
2.1.2
Skip connections
An intuitive question is that, is deconvolution able to recover image details from the image abstraction
only? We find that in shallow networks with only a few layers of convolution, deconvolution is able to
recover the details. However, when the network goes deeper or using operations such as max pooling,
deconvolution does not work so well, possibly because too much image detail is already lost in the
convolution. The second question is that, when our network goes deeper, does it achieve performance
gain? We observe that deeper networks often suffer from gradient vanishing and become hard to
train?a problem that is well addressed in the literature.
To address the above two problems, inspired by highway networks [28] and deep residual networks
[11], we add skip connections between two corresponding convolutional and deconvolutional layers
as shown in Figure 1. A building block is shown in Figure 2. There are two reasons for using
such connections. First, when the network goes deeper, as mentioned above, image details can be
lost, making deconvolution weaker in recovering them. However, the feature maps passed by skip
connections carry much image detail, which helps deconvolution to recover a better clean image.
Second, the skip connections also achieve benefits on back-propagating the gradient to bottom layers,
which makes training deeper network much easier as observed in [28] and [11]. Note that our skip
layer connections are very different from the ones proposed in [28] and [11], where the only concern
is on the optimization side. In our case, we want to pass information of the convolutional feature
maps to the corresponding deconvolutional layers.
Instead of directly learning the mappings from input X to the output Y , we would like the network to
fit the residual [11] of the problem, which is denoted as F(X) = Y ? X. Such a learning strategy
is applied to inner blocks of the encoding-decoding network to make training more effective. Skip
connections are passed every two convolutional layers to their mirrored deconvolutional layers. Other
configurations are possible and our experiments show that this configuration already works very well.
Using such skip connections makes the network easier to be trained and gains restoration performance
via increasing network depth.
The very deep highway networks [28] are essentially feed-forward long short-term memory (LSTMs)
with forget gates, and the CNN layers of deep residual network [11] are feed-forward LSTMs without
gates. Note that our deep residual networks are in general not in the format of standard feed-forward
LSTMs.
2.2
Discussions
Training with symmetric skip connections As mentioned above, using skip connections mainly has two benefits: (1) passing image detail forward, which helps to recover clean images, and (2) passing the gradient backward, which helps to find a better local minimum. We design experiments to demonstrate these observations.
We first compare two networks trained for denoising noise of σ = 70. In the first network, we use 5 layers of 3×3 convolution with stride 3. The input size of the training data is 243×243, which results in a vector after 5 layers of convolution. Then deconvolution is used to recover the input. The second network uses the same settings as the first one, except for adding skip connections. The results are shown in Figure 3(a). We can observe that it is hard for deconvolution to recover details from only a vector encoding the abstraction of the input, which shows that the ability of deconvolution to recover image details is limited. However, if we use skip connections, the network can still recover the input, because details are passed from the top layers by the skip connections.
We also train five networks to show that using skip connections helps to back-propagate the gradient during training and to better fit the end-to-end mapping, as shown in Figure 3(b). The five networks are: 10-, 20- and 30-layer networks without skip connections, and 20- and 30-layer networks with skip connections.
Figure 3: Analysis of skip connections: (a) recovering image details using deconvolution and skip connections; (b) the training loss during training; (c) comparison of the skip connection types in [11] and our model, where "Block-i-RED" denotes the connections in our model with block size i and "Block-i-He et al." denotes the connections in He et al. [11] with block size i. PSNR values at the last iteration for the curves are: 25.08, 24.59, 25.30 and 25.21.
As we can see, the training loss increases as the network goes deeper without skip connections (a similar phenomenon is also observed in [11]), but we obtain a lower loss value when using them.
Comparison with deep residual networks [11] One may use different types of skip connections in our network; a straightforward alternative is the one in [11]. In [11], the skip connections are added to divide the network into sequential blocks. A benefit of our model is that our skip connections have element-wise correspondence, which can be very important in pixel-wise prediction problems. We carry out experiments to compare the two types of skip connections. Here the block size indicates the span of the connections. The results are shown in Figure 3(c). We can observe that our connections often converge to a better optimum, demonstrating that element-wise correspondence can be important.
Dealing with different levels of noise/corruption An important question is whether we can handle different levels of corruption with a single model. Almost all existing methods need to train different models for different levels of corruption, and typically they need to estimate the corruption level first. We use a model trained as in [1] to denoise different levels of noise, with σ being 10, 30, 50 and 70. The obtained average PSNR values on the 14 images are 29.95dB, 27.81dB, 18.62dB and 14.84dB, respectively. The results show that parameters trained on a single noise level cannot handle different levels of noise well. Therefore, in this paper, we aim to train a single model for recovering different levels of corruption, which are different noise levels in the task of image denoising and different scaling parameters in image super-resolution. The large capacity of the network is the key to this success.
2.3
Training
Learning the end-to-end mapping from corrupted images to clean ones requires estimating the weights Θ represented by the convolutional and deconvolutional kernels. This is achieved by minimizing the Euclidean loss between the outputs of the network and the clean images. Specifically, given a collection of N training sample pairs (X_i, Y_i), where X_i is a corrupted image and Y_i is the clean version serving as the ground-truth, we minimize the following Mean Squared Error (MSE):

$$\mathcal{L}(\Theta) = \frac{1}{N} \sum_{i=1}^{N} \big\| \mathcal{F}(X_i; \Theta) - Y_i \big\|_F^2 . \qquad (1)$$
We implement and train our network using Caffe [16]. In practice, we find that using Adam [17] with learning rate $10^{-4}$ converges faster than traditional stochastic gradient descent (SGD). The base learning rate is the same for all layers, different from [7, 15], in which a smaller learning rate is set for the last layer. This trick is not necessary in our network.
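For concreteness, a minimal training loop corresponding to Eq. (1) could look as follows. This is a sketch in PyTorch rather than the Caffe setup used in the paper, and `model` and `loader` are placeholders for the network and the training data:

```python
import torch

# model: any network F(X; Theta) mapping corrupted to clean images.
# loader: yields (corrupted, clean) training pairs.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # same base rate
criterion = torch.nn.MSELoss()        # the Euclidean loss of Eq. (1)

for corrupted, clean in loader:
    optimizer.zero_grad()
    loss = criterion(model(corrupted), clean)
    loss.backward()
    optimizer.step()
```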
Following general settings in the literature, we use gray-scale images for denoising and the luminance channel for super-resolution in this paper. 300 images from the Berkeley Segmentation Dataset (BSD) [20] are used to generate the training set. For each image, patches of size 50×50 are sampled
as ground-truth. For denoising, we add additive Gaussian noise to the patches multiple times to
generate a large training set (about 0.5M). For super-resolution, we first down-sample a patch and
then up-sample it to its original size, obtaining a low-resolution version as the input of the network.
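A sketch of how such training pairs could be generated (our own illustration with NumPy and PIL; the helper names are hypothetical, and the choice of bicubic resampling is an assumption):

```python
import numpy as np
from PIL import Image

def noisy_pair(patch, sigma):
    """Denoising pair: a clean 50x50 patch and its Gaussian-corrupted copy."""
    noise = np.random.normal(0.0, sigma, patch.shape)
    return np.clip(patch + noise, 0, 255), patch

def lowres_pair(patch, scale):
    """Super-resolution pair: down-sample then up-sample back to the
    original size, so input and ground-truth have equal dimensions."""
    h, w = patch.shape
    img = Image.fromarray(patch.astype(np.uint8))
    small = img.resize((w // scale, h // scale), Image.BICUBIC)
    blurry = small.resize((w, h), Image.BICUBIC)
    return np.asarray(blurry, dtype=np.float64), patch
```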
2.4
Testing
Although trained on local patches, our network can perform denoising and super-resolution on images of arbitrary size. Given a testing image, one can simply pass it forward through the network, which already obtains better performance than existing methods. To achieve smoother results, we propose to process a corrupted image in multiple orientations. Different from segmentation, the filter kernels in our network only eliminate the corruption and are not sensitive to the orientation of the image content. Therefore, we can rotate and mirror-flip the kernels, perform the forward pass multiple times, and then average the outputs to obtain a smoother image. We observe that this leads to slightly better denoising and super-resolution performance.
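Rotating and mirroring the kernels is equivalent to transforming the input image instead; the following is a sketch of this eight-orientation averaging (our illustration, not the authors' code; `model` maps a 2-D array to a 2-D array):

```python
import numpy as np

def ensemble_forward(model, image):
    """Average the model output over 4 rotations x 2 flips (8 variants),
    undoing each transform before averaging."""
    outputs = []
    for flip in (False, True):
        base = np.fliplr(image) if flip else image
        for k in range(4):                     # 0, 90, 180, 270 degrees
            out = model(np.rot90(base, k))
            out = np.rot90(out, -k)            # rotate back
            outputs.append(np.fliplr(out) if flip else out)
    return np.mean(outputs, axis=0)
```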
3
Experiments
In this section, we evaluate the denoising and super-resolution performance of our models against a few existing state-of-the-art methods. Denoising experiments are performed on two datasets: 14 common benchmark images [33, 3, 18, 9] and the BSD200 dataset. We test additive Gaussian noise with zero mean and standard deviation σ = 10, 30, 50 and 70, respectively. BM3D [5], NCSR [8], EPLL [34], PCLR [3], PGPD [33] and WNNM [9] are compared with our method. For super-resolution, we compare our network with SRCNN [7], NBSRF [25], CSCN [30], CSC [10], TSE [13] and ARFL+ [26] on three datasets: Set5, Set14 and BSD100. The scaling parameter is tested with 2, 3 and 4.
Peak Signal-to-Noise Ratio (PSNR) and Structural SIMilarity (SSIM) index are calculated for
evaluation. For our method, which is denoted as RED-Net, we implement three versions: RED10
contains 5 convolutional and deconvolutional layers without skip connections, RED20 contains 10
convolutional and deconvolutional layers with skip connections, and RED30 contains 15 convolutional
and deconvolutional layers with skip connections.
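For reference, PSNR for 8-bit images can be computed as follows (this is the standard definition, not code from the paper):

```python
import numpy as np

def psnr(clean, restored, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between two images."""
    mse = np.mean((clean.astype(np.float64) - restored) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```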
3.1
Image Denoising
Evaluation on the 14 images Table 1 presents the PSNR and SSIM results for σ = 10, 30, 50, and 70. We can make some observations from the results. First of all, the 10-layer convolutional and deconvolutional network already achieves better results than the state-of-the-art methods, which demonstrates that combining convolution and deconvolution works well for denoising, even without any skip connections. Moreover, when the network goes deeper, the skip connections proposed in this paper help to achieve even better denoising performance, exceeding the existing best method WNNM [9] by 0.32dB, 0.43dB, 0.49dB and 0.51dB on noise levels σ = 10, 30, 50 and 70, respectively. In contrast, WNNM is only slightly better than the second best existing method PCLR [3], by 0.01dB, 0.06dB, 0.03dB and 0.01dB, respectively, which highlights the large improvement achieved by our model. Last, we observe that the more severe the noise is, the larger the improvement of our model over the other methods. Similar observations can be made on the SSIM evaluation.
Table 1: Average PSNR and SSIM results for σ = 10, 30, 50, 70 on the 14 images.

PSNR
σ     BM3D    EPLL    NCSR    PCLR    PGPD    WNNM    RED10   RED20   RED30
10    34.18   33.98   34.27   34.48   34.22   34.49   34.62   34.74   34.81
30    28.49   28.35   28.44   28.68   28.55   28.74   28.95   29.10   29.17
50    26.08   25.97   25.93   26.29   26.19   26.32   26.51   26.72   26.81
70    24.65   24.47   24.36   24.79   24.71   24.80   24.97   25.23   25.31

SSIM
σ     BM3D    EPLL    NCSR    PCLR    PGPD    WNNM    RED10   RED20   RED30
10    0.9339  0.9332  0.9342  0.9366  0.9309  0.9363  0.9374  0.9392  0.9402
30    0.8204  0.8200  0.8203  0.8263  0.8199  0.8273  0.8327  0.8396  0.8423
50    0.7427  0.7354  0.7415  0.7538  0.7442  0.7517  0.7571  0.7689  0.7733
70    0.6882  0.6712  0.6871  0.6997  0.6913  0.6975  0.7012  0.7177  0.7206
Evaluation on BSD200 For testing efficiency, we convert the images of BSD200 to gray-scale and resize them to smaller ones. Then all the methods are run on these images to get the average PSNR and SSIM results for σ = 10, 30, 50, and 70, as shown in Table 2. The denoising performance of the existing methods does not differ much, while our model achieves 0.38dB, 0.47dB, 0.49dB and 0.42dB higher PSNR than WNNM.
Table 2: Average PSNR and SSIM results for σ = 10, 30, 50, 70 on 200 images from BSD.

PSNR
σ     BM3D    EPLL    NCSR    PCLR    PGPD    WNNM    RED10   RED20   RED30
10    33.01   33.01   33.09   33.30   33.02   33.25   33.49   33.59   33.63
30    27.31   27.38   27.23   27.54   27.33   27.48   27.79   27.90   27.95
50    25.06   25.17   24.95   25.30   25.18   25.26   25.54   25.67   25.75
70    23.82   23.81   23.58   23.94   23.89   23.95   24.13   24.33   24.37

SSIM
σ     BM3D    EPLL    NCSR    PCLR    PGPD    WNNM    RED10   RED20   RED30
10    0.9218  0.9255  0.9226  0.9261  0.9176  0.9244  0.9290  0.9310  0.9319
30    0.7755  0.7825  0.7738  0.7827  0.7717  0.7807  0.7918  0.7993  0.8019
50    0.6831  0.6870  0.6777  0.6947  0.6841  0.6928  0.7032  0.7117  0.7167
70    0.6240  0.6168  0.6166  0.6336  0.6245  0.6346  0.6367  0.6521  0.6551

3.2
Image super-resolution
The evaluation on Set5 is shown in Table 3. Our 10-layer network already outperforms the compared methods, and we achieve even better performance with deeper networks. The 30-layer network exceeds the second best method CSCN by 0.52dB, 0.56dB and 0.47dB on scales 2, 3 and 4, respectively. The evaluation on Set14 is shown in Table 4. The improvement on Set14 is not as significant as that on Set5, but we can still observe that the 30-layer network achieves higher PSNR than the second best CSCN, by 0.23dB, 0.06dB and 0.10dB. The results on BSD100, shown in Table 5, are similar to those on Set5. The second best method is still CSCN, whose performance is worse than that of our 10-layer network. Our deeper networks obtain much larger performance gains than the others.
Table 3: Average PSNR and SSIM results on Set5.

PSNR
s    SRCNN   NBSRF   CSCN    CSC     TSE     ARFL+   RED10   RED20   RED30
2    36.66   36.76   37.14   36.62   36.50   36.89   37.43   37.62   37.66
3    32.75   32.75   33.26   32.66   32.62   32.72   33.43   33.80   33.82
4    30.49   30.44   31.04   30.36   30.33   30.35   31.12   31.40   31.51

SSIM
s    SRCNN   NBSRF   CSCN    CSC     TSE     ARFL+   RED10   RED20   RED30
2    0.9542  0.9552  0.9567  0.9549  0.9537  0.9559  0.9590  0.9597  0.9599
3    0.9090  0.9104  0.9167  0.9098  0.9094  0.9094  0.9197  0.9229  0.9230
4    0.8628  0.8632  0.8775  0.8607  0.8623  0.8583  0.8794  0.8847  0.8869

Table 4: Average PSNR and SSIM results on Set14.

PSNR
s    SRCNN   NBSRF   CSCN    CSC     TSE     ARFL+   RED10   RED20   RED30
2    32.45   32.45   32.71   32.31   32.23   32.52   32.77   32.87   32.94
3    29.30   29.25   29.55   29.15   29.16   29.23   29.42   29.61   29.61
4    27.50   27.42   27.76   27.30   27.40   27.41   27.58   27.80   27.86

SSIM
s    SRCNN   NBSRF   CSCN    CSC     TSE     ARFL+   RED10   RED20   RED30
2    0.9067  0.9071  0.9095  0.9070  0.9036  0.9074  0.9125  0.9138  0.9144
3    0.8215  0.8212  0.8271  0.8208  0.8197  0.8201  0.8318  0.8343  0.8341
4    0.7513  0.7511  0.7620  0.7499  0.7518  0.7483  0.7654  0.7697  0.7718

3.3
Evaluation using a single model
To construct the training set, we extract image patches with different noise levels and scaling
parameters for denoising and super-resolution. Then a 30-layer network is trained for the two tasks
respectively. The evaluation results are shown in Table 6 and Table 7. Although the model is trained on different levels of corruption, we observe that its performance only slightly degrades compared to the case in which separate models are used for denoising and super-resolution. This may be due to the fact that the network has to fit much more complex mappings. Except that CSCN works slightly better on Set14 super-resolution with scales 3 and 4, our network still beats the existing methods, showing that it works much better in image denoising and super-resolution even when using only a single model to deal with complex corruption.

Table 5: Average PSNR and SSIM results on BSD100 for super-resolution.

PSNR
s    SRCNN   NBSRF   CSCN    CSC     TSE     ARFL+   RED10   RED20   RED30
2    31.36   31.30   31.54   31.27   31.18   31.35   31.85   31.95   31.99
3    28.41   28.36   28.58   28.31   28.30   28.36   28.79   28.90   28.93
4    26.90   26.88   27.11   26.83   26.85   26.86   27.25   27.35   27.40

SSIM
s    SRCNN   NBSRF   CSCN    CSC     TSE     ARFL+   RED10   RED20   RED30
2    0.8879  0.8876  0.8908  0.8876  0.8855  0.8885  0.8953  0.8969  0.8974
3    0.7863  0.7856  0.7910  0.7853  0.7843  0.7851  0.7975  0.7993  0.7994
4    0.7103  0.7110  0.7191  0.7101  0.7108  0.7091  0.7238  0.7268  0.7290
Table 6: Average PSNR and SSIM results for image denoising using a single 30-layer network.

              14 images                           BSD200
      σ=10    σ=30    σ=50    σ=70        σ=10    σ=30    σ=50    σ=70
PSNR  34.49   29.09   26.75   25.20       33.38   27.88   25.69   24.36
SSIM  0.9368  0.8414  0.7716  0.7157      0.9280  0.7980  0.7119  0.6544

Table 7: Average PSNR and SSIM results for image super-resolution using a single 30-layer network.

              Set5                    Set14                   BSD100
      s=2     s=3     s=4     s=2     s=3     s=4     s=2     s=3     s=4
PSNR  37.56   33.70   31.33   32.81   29.50   27.72   31.96   28.88   27.35
SSIM  0.9595  0.9222  0.8847  0.9135  0.8334  0.7698  0.8972  0.7993  0.7276
4
Conclusions
In this paper we have proposed a deep encoding-decoding framework for image restoration. Convolution and deconvolution are combined, modeling the restoration problem as extracting primary image content and then recovering details. More importantly, we propose to use skip connections, which help recover clean images and tackle the optimization difficulty caused by vanishing gradients, and thus obtain performance gains when the network goes deeper. Experimental results and our analysis show that our network achieves better performance than state-of-the-art methods on image denoising and super-resolution.
This work was in part supported by the Natural Science Foundation of China (Grants 61673204, 61273257, 61321491), the Program for Distinguished Talents of Jiangsu Province, China (Grant 2013-XXRJ-018), the Fundamental Research Funds for the Central Universities (Grant 020214380026), and an Australian Research Council Future Fellowship (FT120100969). X.-J. Mao's contribution was made while visiting the University of Adelaide. His visit was supported by the joint PhD program of the China Scholarship Council.
References
[1] H. C. Burger, C. J. Schuler, and S. Harmeling. Image denoising: Can plain neural networks compete with BM3D? In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 2392–2399, 2012.
[2] P. Chatterjee and P. Milanfar. Clustering-based denoising with locally learned dictionaries. IEEE Trans. Image Process., 18(7):1438–1451, 2009.
[3] F. Chen, L. Zhang, and H. Yu. External patch prior guided internal clustering for image denoising. In Proc. IEEE Int. Conf. Comp. Vis., pages 603–611, 2015.
[4] Z. Cui, H. Chang, S. Shan, B. Zhong, and X. Chen. Deep network cascade for image super-resolution. In Proc. Eur. Conf. Comp. Vis., pages 49–64, 2014.
[5] K. Dabov, A. Foi, V. Katkovnik, and K. O. Egiazarian. Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process., 16(8):2080–2095, 2007.
[6] C. Dong, Y. Deng, C. C. Loy, and X. Tang. Compression artifacts reduction by a deep convolutional network. In Proc. IEEE Int. Conf. Comp. Vis., pages 576–584, 2015.
[7] C. Dong, C. C. Loy, K. He, and X. Tang. Image super-resolution using deep convolutional networks. IEEE Trans. Pattern Anal. Mach. Intell., 38(2):295–307, 2016.
[8] W. Dong, L. Zhang, G. Shi, and X. Li. Nonlocally centralized sparse representation for image restoration. IEEE Trans. Image Process., 22(4):1620–1630, 2013.
[9] S. Gu, L. Zhang, W. Zuo, and X. Feng. Weighted nuclear norm minimization with application to image denoising. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 2862–2869, 2014.
[10] S. Gu, W. Zuo, Q. Xie, D. Meng, X. Feng, and L. Zhang. Convolutional sparse coding for image super-resolution. In Proc. IEEE Int. Conf. Comp. Vis., pages 1823–1831, 2015.
[11] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., 2016 (arXiv:1512.03385).
[12] S. Hong, H. Noh, and B. Han. Decoupled deep neural network for semi-supervised semantic segmentation. In Proc. Advances in Neural Inf. Process. Syst., 2015.
[13] J. Huang, A. Singh, and N. Ahuja. Single image super-resolution from transformed self-exemplars. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 5197–5206, 2015.
[14] Y. Huang, W. Wang, and L. Wang. Bidirectional recurrent convolutional networks for multi-frame super-resolution. In Proc. Advances in Neural Inf. Process. Syst., pages 235–243, 2015.
[15] V. Jain and H. S. Seung. Natural image denoising with convolutional networks. In Proc. Advances in Neural Inf. Process. Syst., pages 769–776, 2008.
[16] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.
[17] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. In Proc. Int. Conf. Learn. Representations, 2015.
[18] H. Liu, R. Xiong, J. Zhang, and W. Gao. Image denoising via adaptive soft-thresholding based on non-local samples. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 484–492, 2015.
[19] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 3431–3440, 2015.
[20] D. Martin, C. Fowlkes, D. Tal, and J. Malik. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In Proc. IEEE Int. Conf. Comp. Vis., volume 2, pages 416–423, July 2001.
[21] P. Milanfar. A tour of modern image filtering: New insights and methods, both practical and theoretical. IEEE Signal Process. Mag., 30(1):106–128, 2013.
[22] H. Noh, S. Hong, and B. Han. Learning deconvolution network for semantic segmentation. In Proc. IEEE Int. Conf. Comp. Vis., pages 1520–1528, 2015.
[23] S. Osher, M. Burger, D. Goldfarb, J. Xu, and W. Yin. An iterative regularization method for total variation-based image restoration. Multiscale Modeling & Simulation, 4(2):460–489, 2005.
[24] L. I. Rudin, S. Osher, and E. Fatemi. Nonlinear total variation based noise removal algorithms. Phys. D, 60(1-4):259–268, November 1992.
[25] J. Salvador and E. Perez-Pellitero. Naive Bayes super-resolution forest. In Proc. IEEE Int. Conf. Comp. Vis., pages 325–333, 2015.
[26] S. Schulter, C. Leistner, and H. Bischof. Fast and accurate image upscaling with super-resolution forests. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 3791–3799, 2015.
[27] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In Proc. Int. Conf. Learn. Representations, 2015.
[28] R. K. Srivastava, K. Greff, and J. Schmidhuber. Training very deep networks. In Proc. Advances in Neural Inf. Process. Syst., pages 2377–2385, 2015.
[29] P. Vincent, H. Larochelle, Y. Bengio, and P. Manzagol. Extracting and composing robust features with denoising autoencoders. In Proc. Int. Conf. Mach. Learn., pages 1096–1103, 2008.
[30] Z. Wang, D. Liu, J. Yang, W. Han, and T. S. Huang. Deep networks for image super-resolution with sparse prior. In Proc. IEEE Int. Conf. Comp. Vis., pages 370–378, 2015.
[31] Z. Wang, Y. Yang, Z. Wang, S. Chang, J. Yang, and T. S. Huang. Learning super-resolution jointly from external and internal examples. IEEE Trans. Image Process., 24(11):4359–4371, 2015.
[32] J. Xie, L. Xu, and E. Chen. Image denoising and inpainting with deep neural networks. In Proc. Advances in Neural Inf. Process. Syst., pages 350–358, 2012.
[33] J. Xu, L. Zhang, W. Zuo, D. Zhang, and X. Feng. Patch group based nonlocal self-similarity prior learning for image denoising. In Proc. IEEE Int. Conf. Comp. Vis., pages 244–252, 2015.
[34] D. Zoran and Y. Weiss. From learning models of natural image patches to whole image restoration. In Proc. IEEE Int. Conf. Comp. Vis., pages 479–486, 2011.
Community Detection on Evolving Graphs
Aris Anagnostopoulos
Sapienza University of Rome
[email protected]
Jakub Łącki
Sapienza University of Rome
[email protected]
Stefano Leonardi
Sapienza University of Rome
[email protected]
Silvio Lattanzi
Google
[email protected]
Mohammad Mahdian
Google
[email protected]
Abstract
Clustering is a fundamental step in many information-retrieval and data-mining applications. Detecting clusters in graphs is also a key tool for finding the community
structure in social and behavioral networks. In many of these applications, the input
graph evolves over time in a continual and decentralized manner, and, to maintain
a good clustering, the clustering algorithm needs to repeatedly probe the graph.
Furthermore, there are often limitations on the frequency of such probes, either
imposed explicitly by the online platform (e.g., in the case of crawling proprietary
social networks like twitter) or implicitly because of resource limitations (e.g., in
the case of crawling the web).
In this paper, we study a model of clustering on evolving graphs that captures
this aspect of the problem. Our model is based on the classical stochastic block
model, which has been used to assess rigorously the quality of various static
clustering methods. In our model, the algorithm is supposed to reconstruct the
planted clustering, given the ability to query for small pieces of local information
about the graph, at a limited rate. We design and analyze clustering algorithms
that work in this model, and show asymptotically tight upper and lower bounds on
their accuracy. Finally, we perform simulations, which demonstrate that our main
asymptotic results hold true also in practice.
1
Introduction
This work studies the problem of detecting the community structure of a dynamic network according
to the framework of evolving graphs [3]. In this model the underlying graph evolves over time,
subject to a probabilistic process that modifies the vertices and the edges of the graph. The algorithm
can learn the changes that take place in the network only by probing the graph at a limited rate. The
main question for the evolving graph model is to design strategies for probing the graph, such as to
obtain information that is sufficient to maintain a solution that is competitive with a solution that can
be computed if the entire underlying graph is known.
The motivation for studying this model comes from the the inadequacy of the classical computational
paradigm, which assumes perfect knowledge of the input data and an algorithm that terminates. The
evolving graph model captures the evolving and decentralized nature of large-scale online social
networks. An important part of the model is that only a limited number of probes can be made at
each time step. This assumption is motivated by the limitations imposed by many social network
platforms such as Twitter or Facebook, where the network is constantly evolving and the access to the
structure is possible through an API that implements a rate-limited oracle. Even in cases where such
rate-limits are not exogenously imposed (e.g., when the network under consideration is the Web),
resource constraints often prohibit us from making too many probes in each time step (probing a large graph stored across many machines is a costly operation). The evolving graph model has been
considered for PageRank computation [4] and connectivity problems [3]. This work is the first to
address the problem of community detection in the evolving graph model.
Our probabilistic model of the evolution of the community structure of a network is based on the
stochastic block model (SBM) [1, 2, 5, 10]. It is a widely accepted model of probabilistic networks
for the study of community-detection methods, which generates graphs with an embodied community
structure. In the basic form of the model, vertices of a graph are first partitioned into k disjoint
communities in a probabilistic manner. Then, two nodes of the same community are linked with
probability p, and two nodes of distinct communities are linked with probability q, where p > q. All
the connections are mutually independent.
We make a first step in the study of community detection in the evolving-graph model by considering
an evolving stochastic block model, which allows nodes to change their communities according to a
given stochastic process.
1.1
Our Contributions
Our first step is to define a meaningful model for community detection on evolving graphs. We do this by extending the stochastic block model to the evolving setting. The evolving stochastic block model generates an n-node graph, whose nodes are partitioned into k communities. At each time step, some nodes may change their communities in a random fashion. Namely, with probability 1/n each node is reassigned; when this happens it is moved to the ith community C_i (which we also call a cluster C_i) with probability μ_i, where {μ_i}_{i=1}^k form a probability distribution. After being reassigned, the neighborhood of the node is updated accordingly.
While these changes are being performed, we are unaware of them. Yet, at each step, we have a budget of α queries that we can perform on the graph (later we specify values of α that allow us to obtain meaningful results: a value of α that is too small may not allow the algorithm to catch up with the changes, whereas a value that is too large makes the problem trivial and unrealistic). A query of the algorithm consists in choosing a single node. The result of the query is the list of neighbors of the chosen node at the moment of the query. Our goal is to design an algorithm that is able to issue queries over time in such a way that at each step it may report a partitioning {Ĉ_1, ..., Ĉ_k} that is as close as possible to the real one, {C_1, ..., C_k}. A difficulty of the evolving-graph model is that, because we observe the process for an infinite amount of time, even events with negligible probability will take place. Thus we should design algorithms that are able to provide guarantees for most of the time and recover even after highly unlikely events take place.
Let us now present our results at a high level. For simplicity of the description, let us assume that p = 1, q = 0, and that the query model is slightly different: namely, the algorithm can discover the entire contents of a given cluster with one query.¹
We first study algorithms that at each step pick the cluster to query independently at random from some predefined distribution. One natural idea is to pick a cluster proportionally to its size (which is essentially the same as querying the cluster of a node chosen uniformly at random). However, we show that a better strategy is to query a cluster proportionally to the square root of its size. While the two strategies are equivalent if the cluster probabilities {μ_i}_{i=1}^k are uniform, the latter becomes better in the case of skewed distributions. For example, if we have n^{1/3} clusters and the associated probabilities are μ_i ∝ 1/i², the first strategy incorrectly classifies O(n^{1/3}) nodes in each step (in expectation), compared to only O(log² n) nodes misclassified by the second strategy. Furthermore, our experimental analysis suggests that the strategy of probing a cluster with a frequency proportional to the square root of its size is not only efficient in theory, but may also be a good choice in practice.
We later improve this result and give an algorithm that uses a mixture of cluster and node queries. In the considered example, when μ_i ∝ 1/i², at each step it reports clusterings with only O(1) misclassified nodes (in expectation). Although the query strategy and the error bound expressed in terms of {μ_i}_{i=1}^k are both quite complex, we are able to show that the algorithm is optimal, by giving a matching lower bound.

¹ In our analysis we show that the assumption about the query model can be dropped at the cost of increasing the number of queries that we perform at each time step by a constant factor.
Finally, we also show how to deal with the case when 1 ≥ p > q ≥ 0. In this case querying a node v provides us with only partial information about its cluster C: v is connected to only a subset of the nodes in C. Under some assumptions on p and q, we provide an algorithm that, given a node, can discover the entire contents of its cluster with O(log n/p) node queries. This algorithm allows us to extend the previous results to the case when p > q > 0 (and p and q are sufficiently far from each other), at the cost of performing α = O(log n/p) queries per step. Even though the evolving graph model requires the algorithm to issue a low number of queries, our analysis shows that (under reasonable assumptions on p and q) this small number of queries is sufficient to maintain a high-quality clustering.
Our theoretical results hold for large enough n. Therefore, we also perform simulations, which
demonstrate that our final theoretically optimal algorithm is able to beat the other algorithms even for
small values of n.
2
Related Work
Clustering and community-detection techniques have been studied by hundreds of researchers. In
social networks, detecting the clustering structure is a basic primitive for finding communities of
users, that is, sets of users sharing similar interests or affiliations [12, 16]. In recommendation
networks cluster discovery is often used to improve the quality of recommendation systems [13].
Other relevant applications of clustering can be found in image processing, bioinformatics, image
analysis and text classification.
Prior to the evolving model, a number of dynamic computation models have been studied, such as
online computation (the input data are revealed step by step), dynamic algorithms and data structures
(the input data are modified dynamically), and streaming computation (the input data are revealed
step by step while the algorithm is space constrained). Hartmann et al. [9] presented a survey of
results for clustering dynamic networks in some of the previously mentioned models. However, none
of the aforementioned models capture the relevant features of the dynamic evolution of large-scale
data sets: the data evolves at a slow pace and an algorithm can learn the data changes only by probing
specific portions of the graph at some cost.
The stochastic block model, used by sociologists [10], has recently received a growing attention in
computer science, machine learning, and statistics [1, 2, 5, 6, 17]. At the theoretical level, most work
has studied the range of parameters, for which the communities can be recovered from the generated
graph, both in the case of two [1, 7, 11, 14, 15] or more [2, 5] communities.
Another line of research focused on studying different dynamic versions of the stochastic block
model [8, 18, 19, 20]. Yet, there is a lack of theoretical work on modeling and analyzing stochastic
block models, and more generally community detection on evolving graph. This paper makes the
first step in this direction.
3
Model
In this paper we analyze an evolving extension of the stochastic block model [10]. We call this new model the evolving stochastic block model. In this model we consider a graph of n nodes, which are assigned to one of k clusters, and the probability that two nodes have an edge between them depends on the clusters to which they are assigned. More formally, consider a probability distribution μ_1, ..., μ_k (i.e., μ_i > 0 and Σ_i μ_i = 1). Without loss of generality, throughout the paper we assume μ_1 ≥ ... ≥ μ_k. Also, for each 1 ≤ i ≤ k we assume that μ_i < 1 − δ for some constant 0 < δ < 1.

At the beginning, each node independently picks one of the k clusters. The probability that the node picks cluster i is μ_i. We denote this clustering of the nodes by C. Nodes that pick the same cluster i are connected with a fixed probability p_i (which may depend on n), whereas pairs of nodes that pick two different clusters i and j are connected with probability q_ij (also possibly dependent on n). Note that q_ij = q_ji and the edges are independent of each other. We denote p := min_{1≤i≤k} p_i and q := max_{1≤i,j≤k} q_{i,j}.
So far, our model is very similar to the classic stochastic block model. Now we introduce its main
distinctive property, namely the evolution dynamics.
Evolution model: In our analysis, we assume that the graph evolves in discrete time steps indexed by natural numbers. The nodes change their clusters in a random manner. At each time step, every node v is reassigned with probability 1/n (independently of the other nodes). When this happens, v first deletes all the edges to its neighbors, then selects a new cluster i with probability μ_i, and finally adds new edges with probability p_i to nodes in cluster i and with probability q_ij to nodes in cluster j, for every j ≠ i. For 1 ≤ i ≤ k and t ∈ ℕ, we denote by C_i^t the set of nodes assigned to cluster i just after the reassignments in time step t. Note that we use C_i to denote the cluster itself, but C_i^t to denote its contents.
Query model: We assume that the algorithm may gather information about the clusters by issuing
queries. In a single query the algorithm chooses a single node v and learns the list of current neighbors
of v. In each time step, the graph is probed after all reassignments are made.
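A sketch of one evolution step and of the node-query oracle, under the definitions above (our own illustrative simulation, not code from the paper; the adjacency is kept as one neighbor set per node, and q is a symmetric matrix of inter-cluster probabilities):

```python
import random

def evolve_step(cluster_of, neighbors, mu, p, q, n):
    """One time step: each node is reassigned w.p. 1/n; a reassigned
    node drops all its edges and re-samples them from its new cluster."""
    k = len(mu)
    for v in range(n):
        if random.random() < 1.0 / n:
            for u in list(neighbors[v]):       # delete all edges of v
                neighbors[u].discard(v)
            neighbors[v].clear()
            i = random.choices(range(k), weights=mu)[0]
            cluster_of[v] = i
            for u in range(n):                 # re-sample the edges of v
                if u == v:
                    continue
                prob = p[i] if cluster_of[u] == i else q[i][cluster_of[u]]
                if random.random() < prob:
                    neighbors[v].add(u)
                    neighbors[u].add(v)

def query(neighbors, v):
    """The oracle: returns the current neighbor list of the chosen node."""
    return set(neighbors[v])
```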
We study algorithms that learn the cluster structure of the graph. The goal of our algorithm is to report an approximate clustering Ĉ of the graph at the end of each time step that is close to the true clustering C. We define the distance between two clusterings (partitions) C = {C_1, C_2, ..., C_k} and Ĉ = {Ĉ_1, Ĉ_2, ..., Ĉ_k} of the nodes as

$$d(C, \hat{C}) = \min_{\sigma} \sum_{i=1}^{k} \left| C_i \,\triangle\, \hat{C}_{\sigma(i)} \right|,$$

where the minimum is taken over all permutations σ of {1, ..., k}, and △ denotes the symmetric difference between two sets, i.e., A△B = (A \ B) ∪ (B \ A).² The distance d(C, Ĉ) is called the error of the algorithm (or of the returned clustering).
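The minimum over permutations can be computed exactly with an assignment solver; the following is a sketch using SciPy, assuming both clusterings are given as lists of k sets (otherwise pad with empty sets, as in footnote 2):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_distance(C, C_hat):
    """d(C, C_hat) = min over permutations sigma of
    sum_i |C_i symmetric-difference C_hat_{sigma(i)}|."""
    k = len(C)
    cost = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            cost[i, j] = len(C[i] ^ C_hat[j])  # symmetric difference size
    rows, cols = linear_sum_assignment(cost)   # Hungarian algorithm
    return int(cost[rows, cols].sum())
```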
Finally, in our analysis we assume that p and q are far apart; more formally, we assume:

Assumption 1. For every i ∈ [k], and parameters K, γ and γ₀ that we fix later, we have: (i) pμ_i > Kq, (ii) p²μ_i n ≥ γ log n, and (iii) pμ_i n ≥ γ₀ log n.
Let us now discuss the above assumptions. Observe that (iii) follows from (ii). However, we prefer to state them separately, as we mostly rely only on (iii). Assumption (iii) is necessary to ensure that most of the nodes in a cluster have at least a single edge to another node in the same cluster. In the analysis, we set γ₀ large enough (yet still constant) to ensure that at each given time t every node has Θ(log n) edges to nodes of the same cluster, with high probability.

We use Assumption 1 in an algorithm that, given a node v, finds all nodes of the cluster of v (correctly with high probability³) and issues only O(log n/p) queries. Our algorithm also uses (ii), which is slightly stronger than (iii) (it implies that two nodes from the same cluster have many common neighbors), as well as (i), which guarantees that (on average) most neighbors of a node v belong to the cluster of v.
Discussion: The assumed graph model is relatively simple, certainly not complex enough to claim that it accurately models real-world graphs. Nevertheless, this work is the first attempt to formally study clustering in dynamic graphs, and several simplifying assumptions are necessary to obtain provable guarantees. Even with this basic model, the analysis is rather involved. Dealing with the difficult features of a more advanced model would overshadow our main findings.

We believe that if we want to keep the number of queries low, that is, O(log n/p), Assumption 1 cannot be relaxed considerably; that is, p and q cannot be too close to each other. At the same time, recovery of clusters in the (non-evolving) stochastic block model has also been studied for stricter ranges of parameters. However, the known algorithms in such settings inspect considerably more nodes and require that the cluster probabilities {μ_i}_{i=1}^k be close to uniform [5]. The results that apply to the case of many clusters with nonuniform sizes require that p and q are relatively far apart. We note that in studying the classic stochastic block model it is a standard assumption to know p and q, so we also assume it in this work for the sake of simplicity.
² Note that we can extend this definition to pairs of clusterings with different numbers of clusters simply by adding empty clusters to the clustering with the smaller number of clusters.
³ We define the term "with high probability" in Section 4.
Our model assumes that (in expectation) only one node changes its cluster at every time step. However,
we believe that the analysis can be extended to the case when c > 1 nodes change their cluster every
step (in expectation) at the cost of using c times more queries.
Generalizing the results of this paper to more general models is a challenging open problem. Some
interesting directions are, for example, using graphs models with overlapping communities or
analyzing a more general model of moving nodes between clusters.
4
Algorithms and Main Results
In this section we outline our main results. For simplicity, we omit some technical details, mostly concerning probability. In particular, we say that an event happens with high probability if it happens with probability at least 1 − 1/n^c, for some constant c > 1, but in this section we do not specify how this constant is defined.⁴

We are interested in studying the behavior of the algorithm in an arbitrary time step. We start by stating a lemma showing that, to obtain an algorithm that can run indefinitely long, it suffices to design an algorithm that uses α queries per step, initializes in O(n log n) steps, and works with high probability for n² steps.

Lemma 1. Assume that there exists an algorithm for clustering evolving graphs that issues α queries per step and that at each time step t such that t = Ω(n log n) and t ≤ n² it reports a clustering of expected error E correctly with high probability. Then, there exists an algorithm for clustering evolving graphs that issues 2α queries per step and at each time step t such that t = Ω(n log n) it reports a clustering of expected error O(E).

To prove this lemma, we show that it suffices to run a new instance of the assumed algorithm every n² steps. In this way, when the first instance is no longer guaranteed to work, the second one has finished initializing and can be used to report clusterings.
4.1
Simulating Node Queries
We now show how to reduce the problem to the setting in which an algorithm can query for the entire
contents of a cluster. This is done in two steps. As a first step, we give an algorithm for detecting the
cluster of a given node v by using only O(log n/p) node queries.
This algorithm maintains a score for each node in the graph. Initially, the scores are all equal to 0. The algorithm queries O(log n/p) neighbors of v and adds a score of 1 to every neighbor of each queried neighbor of v. We use Assumption 1 to prove that after this step, with high probability, there is a gap between the minimum score of a node inside the cluster of v and the maximum score of a node outside it.
Lemma 2. Suppose that Assumption 1 holds. Then, there exists an algorithm that, given a node v,
correctly identifies all nodes in the cluster of v with high probability. It issues O(log n/p) queries.
Observe that Lemma 2 effectively reduces our problem to the case when p = 1 and q = 0: a single
execution of the algorithm gives us the entire cluster of a node, just like a single query for this node
in the case when p = 1 and q = 0.
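A sketch of this scoring procedure follows. The constant c in the number of sampled neighbors and the score threshold of m/2 are illustrative assumptions; the proof of Lemma 2 fixes the actual constants:

```python
import random
from collections import Counter
from math import log

def discover_cluster(query, v, n, p, c=20):
    """Estimate the cluster of v: query O(log n / p) of v's neighbors
    and score every node adjacent to a queried neighbor."""
    nbrs = list(query(v))
    m = min(len(nbrs), int(c * log(n) / p))
    sampled = random.sample(nbrs, m)
    score = Counter()
    for u in sampled:
        for w in query(u):                 # neighbors of a neighbor of v
            score[w] += 1
    # With high probability there is a gap between the scores of nodes
    # inside and outside the cluster of v; threshold at half the samples.
    return {w for w, s in score.items() if s >= m / 2} | {v}
```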
In the second step, we give a data structure that maintains an approximate clustering of the nodes and
detects the number of cluster k together with (approximate) cluster probabilities. Internally, it uses
the algorithm of Lemma 2.
Lemma 3. Suppose that Assumption 1 holds. Then there exists a data structure that at each time step t = Ω(n) may answer the following queries:

1. Given a cluster number i, return a node v such that Pr(v ∈ C_i^t) ≥ 1/2.
2. Given C_i^t (the contents of cluster C_i), return i.
3. Return k and a sequence μ'_1, ..., μ'_k such that for each 1 ≤ i ≤ k we have μ_i/2 ≤ μ'_i ≤ 3μ_i/2.
⁴ Usually, the constant c can be made arbitrarily large by tuning the constants of Assumption 1.
The data structure runs correctly for n² steps with high probability and issues O(log n/p) queries per step. Furthermore, if p = 1 and q = 0, it makes only 1 query per step.
Note that because the data structure can only use node queries to access the graph, it imposes its own numbering on the clusters, which it uses consistently. Let us now describe the high-level idea behind it. In each step the data structure selects a node uniformly at random and discovers its entire cluster using the algorithm of Lemma 2. We show that this implies that within any n/16 time steps each cluster is queried at least once with high probability. The main challenge lies in refreshing the knowledge about the clusters. The data structure internally maintains a clustering D_1, ..., D_k. However, when it queries some cluster C, it is not clear to which of D_1, ..., D_k the cluster C corresponds. To deal with this, we show that the number of changes in each cluster within n/16 time steps is so low (again, with high probability) that there is a single cluster D ∈ {D_1, ..., D_k} for which |D ∩ C| > |C|/2.
The data structure of Lemma 3 can be used to simulate queries for clusters in the following way. Assume we want to discover the contents of cluster i. First, we use the data structure to get a node v such that Pr(v ∈ C_i^t) ≥ 1/2. Then, we can use the algorithm of Lemma 2 to get the entire cluster C′ of node v. Finally, we may use the data structure again to verify whether C′ is indeed C_i^t. This is the case with probability at least 1/2.
Moreover, the data structure allows us to assume that the algorithms are initially only given the
number of nodes n and the values of p and q, because the data structure can provide to the algorithms
both the number of clusters k and their (approximate) probabilities.
4.2
Clustering Algorithms
Using the results of Section 4.1, we may now assume that algorithms may query the clusters directly. This allows us to give a simple clustering algorithm. The algorithm first computes a probability distribution ρ_1, ..., ρ_k on the clusters, which is a function of the cluster probability distribution μ_1, ..., μ_k. Although the cluster probability distribution is not part of the input data, we may use the approximate distribution μ'_1, ..., μ'_k given by the data structure of Lemma 3; this increases the error of the algorithm only by a constant factor. In each step the algorithm picks a cluster independently at random from the distribution ρ_1, ..., ρ_k and queries it.

In order to determine the probability distribution ρ_1, ..., ρ_k, we express the upper bound on the error in terms of this distribution and then find the sequence ρ_1, ..., ρ_k that minimizes this error.
Theorem 4. Suppose that Assumption 1 holds. Then there exists an algorithm for clustering evolving graphs that issues O(log n/p) queries per step and that for each time step t = Ω(n) reports a clustering of expected error

$$O\left( \Big( \sum_{i=1}^{k} \sqrt{\mu_i} \Big)^{2} \right).$$

Furthermore, if p = 1 and q = 0, it issues only O(1) queries per step.
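For instance, when the query distribution is chosen proportionally to the square root of the cluster probabilities, as suggested in Section 1.1, the per-step choice is simply the following (a sketch; `query_cluster` stands for the simulated cluster query of Section 4.1):

```python
import numpy as np

def step(mu, query_cluster):
    """One step of the simple algorithm: sample a cluster index with
    probability proportional to sqrt(mu_i) and query its contents."""
    rho = np.sqrt(np.asarray(mu, dtype=float))
    rho /= rho.sum()
    i = np.random.choice(len(mu), p=rho)
    return i, query_cluster(i)
```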
The clusterings given by this algorithm already have low error, but we are able to give an even better result. Whenever the algorithm of Theorem 4 queries some cluster C_i, it finds the correct cluster assignment for all nodes that have been reassigned to C_i since it was last queried. These nodes are immediately assigned to the right cluster. However, by querying C_i the algorithm also discovers which nodes have been recently reassigned away from C_i (they used to be in C_i when it was last queried, but are not there now). Our improved algorithm maintains a queue of such nodes and in each step removes two nodes from this queue and locates them. In order to locate a single node v, we first discover its cluster C(v) (using the algorithm of Lemma 2) and then use the data structure of Lemma 3 to find the cluster number of C(v). Once we do that, we can assign v to the right cluster immediately. This results in a better bound on the error.
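A compact sketch of one step of this improved algorithm (our own illustration; `sample_cluster`, `query_cluster`, and `locate` stand for the primitives described above, and the bookkeeping that removes a located node from its stale cluster is omitted for brevity):

```python
from collections import deque

def improved_step(state, queue, sample_cluster, query_cluster, locate):
    """One step: query a random cluster, enqueue the nodes that left it
    since the last query, and locate two previously enqueued nodes."""
    i = sample_cluster()                  # drawn as in Theorem 4
    fresh = query_cluster(i)              # current contents of cluster i
    for v in state[i] - fresh:            # nodes that moved away from i
        queue.append(v)
    state[i] = fresh
    for _ in range(2):                    # locate two queued nodes
        if queue:
            v = queue.popleft()
            state[locate(v)].add(v)       # assign v to its real cluster
```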
Theorem 5. Assume that μ_1 ≥ ... ≥ μ_k and suppose that Assumption 1 holds. Then there exists an algorithm for clustering evolving graphs that issues O(log n/p) queries per step and that for each time step t = Ω(n log n) reports a clustering of expected error

$$O\left( \Big( \sum_{1 \le i \le k} \sqrt{ \mu_i \sum_{i < j \le k} \mu_j } \Big)^{2} \right).$$

Furthermore, if p = 1 and q = 0, it issues only O(1) queries per step.
Note that the assumption μ_1 ≥ ... ≥ μ_k is not needed for the theorem to be true; however, this particular ordering minimizes the value of the bound in the theorem statement.
Let us compare the upper bounds of Theorems 4 and 5. For uniform distributions, where μ_1 = ... = μ_k = 1/k, both analyses give an upper bound of O(k), which means that on average only a constant number of nodes per cluster contribute to the error. Now consider a distribution where k = Θ(√(n/log n)) and μ_i = Θ(1/i²) for 1 ≤ i ≤ k. The error of the first algorithm is O(log² n), whereas the second one has only O(1) expected error. Furthermore, in some cases the difference can be even bigger. Namely, let us define the distribution as follows. Let k = ⌊(n/log n)^{2/3}⌋ + 2 and ε = ((log n)/n)^{1/3}. We set μ_1 = μ_2 = (1 − ε)/2 and μ_i = ε/(k − 2) for 3 ≤ i ≤ k. Then the error of the first algorithm is O((n/log n)^{1/3}), but for the second it is still O(1).
4.3
Lower Bound
Finally, we provide a lower bound for the problem of detecting clusters in the evolving stochastic block model. In particular, it implies that in the case when p = 1 and q = 0 the algorithm of Theorem 5 is optimal (up to a constant factor).
Theorem 6. Every algorithm that issues one query per time step for detecting clusters in the evolving stochastic block model and runs for n/log n steps has average expected error

$$\Omega\left( \Big( \sum_{i=1}^{k} \sqrt{ \mu_i \sum_{j=i+1}^{k} \mu_j } \Big)^{2} \right).$$
We note here that Theorem 6 can be extended to algorithms that are allowed α queries per step, at the loss of a multiplicative factor 1/α. The proof is based on the observation that if the algorithm has not queried some clusters for long enough, it is unaware of the nodes that have been reassigned between them. In particular, if a node v moves from C_i to C_j at time t and the algorithm does not query one of the two clusters after time t, it has small chances of guessing the cluster of v. Some nontrivial analysis is needed to show that a sufficiently large number of such nodes exist, regardless of the choices of the algorithm.
5
Experiments
In this section we compare our optimal algorithm with some benchmarks and show experimentally
its effectiveness. More precisely, we compare three different strategies to select the node to explore
in each step of our algorithm:
• the optimal algorithm of Theorem 5,
• the strategy that probes a random node,
• the strategy that first selects a random cluster and then probes a random node in the cluster.
To compare these three probing strategies we construct a synthetic instance of our model as follows. We build a graph with 10000 nodes with communities of expected size between 50 and 250. The number of communities with expected size ℓ is proportional to ℓ^{−c} for c = 0, 1, 2, 3, so the distribution of community sizes follows a power-law distribution with parameter c ∈ {0, 1, 2, 3}. To generate random communities in our experiment we use p = 0.5 and q = 0.001.

Note that in our experiments the number of communities depends on the various parameters. For simplicity, in the remainder of the section we use k to denote the number of communities in a specific experiment instance.

In the first step of the experiment we generate a random graph with the parameters described above. Then the random evolution starts, and in each step a single node changes its cluster. In the first 10k evolution steps, we construct the data structure described in Lemma 3 by exploring the cluster of a single random node per step. Finally, we run the three different strategies for 25k additional steps, in which we update the clusterings by exploring a single node in each step and retrieving its cluster.
Figure 1: Comparing the performance of the different algorithms on graphs with different community distributions; panels (a)-(d) correspond to c = 0, 1, 2, 3.
At any point during the execution of the algorithm we compute the cluster of a node by exploring at most 30 of its neighbors.

In Figure 1 we show the experimental results for the different values of c ∈ {0, 1, 2, 3}. We repeat all the experiments 5 times and show the average value and the standard deviation. It is interesting to note that the optimal queue algorithm significantly outperforms all the other strategies. It is also interesting to note that the quality of the clustering worsens with time; this is probably because after many steps the data structure becomes less reliable.

Finally, notice that as c decreases and the distribution of community sizes becomes less skewed, the performance of the three algorithms worsens and they become closer to one another, as suggested by our theoretical analysis.
Acknowledgments
We would like to thank Marek Adamczyk for helping in some mathematical derivations. This work is
partly supported by the EU FET project MULTIPLEX no. 317532 and the Google Focused Award on
"Algorithms for Large-scale Data Analysis."
References
[1] Emmanuel Abbe, Afonso S. Bandeira, and Georgina Hall. Exact recovery in the stochastic block model. IEEE Transactions on Information Theory, 62(1):471–487, 2016.
[2] Emmanuel Abbe and Colin Sandon. Community detection in general stochastic block models: Fundamental limits and efficient algorithms for recovery. In IEEE 56th Annual Symposium on Foundations of Computer Science, FOCS 2015, Berkeley, CA, USA, 17-20 October, 2015, pages 670–688, 2015.
[3] Aris Anagnostopoulos, Ravi Kumar, Mohammad Mahdian, Eli Upfal, and Fabio Vandin. Algorithms on evolving graphs. In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, ITCS '12, pages 149–160, 2012.
[4] Bahman Bahmani, Ravi Kumar, Mohammad Mahdian, and Eli Upfal. Pagerank on an evolving graph. In The 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '12, Beijing, China, August 12-16, 2012, pages 24–32, 2012.
[5] Nader H. Bshouty and Philip M. Long. Finding planted partitions in nearly linear time using arrested spectral clustering. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), June 21-24, 2010, Haifa, Israel, pages 135–142, 2010.
[6] D. S. Choi, P. J. Wolfe, and E. M. Airoldi. Stochastic blockmodels with a growing number of classes. Biometrika, pages 1–12, 2012.
[7] Amin Coja-Oghlan. Graph partitioning via adaptive spectral techniques. Combinatorics, Probability & Computing, 19(2):227–284, 2010.
[8] Qiuyi Han, Kevin S. Xu, and Edoardo Airoldi. Consistent estimation of dynamic and multi-layer block models. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, pages 1511–1520, 2015.
[9] Tanja Hartmann, Andrea Kappes, and Dorothea Wagner. Clustering evolving networks. CoRR, abs/1401.3516, 2014.
[10] P. W. Holland, K. Laskey, and S. Leinhardt. Stochastic block models: First steps. Social Networks, 5:109–137, 1983.
[11] Adel Javanmard, Andrea Montanari, and Federico Ricci-Tersenghi. Phase transitions in semidefinite relaxations. Proceedings of the National Academy of Sciences, 113(16):E2218–E2223, 2016.
[12] Ravi Kumar, Jasmine Novak, and Andrew Tomkins. Structure and evolution of online social networks. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '06, pages 611–617, New York, NY, USA, 2006. ACM.
[13] Greg Linden, Brent Smith, and Jeremy York. Amazon.com recommendations: Item-to-item collaborative filtering. IEEE Internet Computing, 7(1):76–80, January 2003.
[14] Laurent Massoulié. Community detection thresholds and the weak Ramanujan property. In Symposium on Theory of Computing, STOC 2014, New York, NY, USA, May 31 - June 03, 2014, pages 694–703, 2014.
[15] Elchanan Mossel, Joe Neeman, and Allan Sly. Consistency thresholds for the planted bisection model. In Proceedings of the Forty-Seventh Annual ACM on Symposium on Theory of Computing, STOC 2015, Portland, OR, USA, June 14-17, 2015, pages 69–75, 2015.
[16] Mark Newman. Networks: An Introduction. Oxford University Press, Inc., New York, NY, USA, 2010.
[17] K. Rohe, S. Chatterjee, and B. Yu. Spectral clustering and the high-dimensional stochastic block model. The Annals of Statistics, 39(4):1878–1915, 2011.
[18] Kevin S. Xu. Stochastic block transition models for dynamic networks. CoRR, abs/1411.5404, 2014.
[19] Kevin S. Xu and Alfred O. Hero III. Dynamic stochastic blockmodels: Statistical models for time-evolving networks. In Social Computing, Behavioral-Cultural Modeling and Prediction - 6th International Conference, SBP 2013, Washington, DC, USA, April 2-5, 2013. Proceedings, pages 201–210, 2013.
[20] Rawya Zreik, Pierre Latouche, and Charles Bouveyron. The dynamic random subgraph model for the clustering of evolving networks. Computational Statistics, pages 1–33, 2016.
Convergence guarantees for kernel-based quadrature
rules in misspecified settings

Motonobu Kanagawa†, Bharath K Sriperumbudur‡, Kenji Fukumizu†
† The Institute of Statistical Mathematics, Tokyo 190-8562, Japan
‡ Department of Statistics, Pennsylvania State University, University Park, PA 16802, USA
[email protected], [email protected], [email protected]
Abstract
Kernel-based quadrature rules are becoming important in machine learning and statistics, as they achieve super-$\sqrt{n}$ convergence rates in numerical integration, and thus provide alternatives to Monte Carlo integration in challenging settings where integrands are expensive to evaluate or where integrands are high dimensional. These rules are based on the assumption that the integrand has a certain degree of smoothness, expressed by requiring that the integrand belong to a certain reproducing kernel Hilbert space (RKHS). However, this assumption can be
degree of smoothness, which is expressed as that the integrand belongs to a certain reproducing kernel Hilbert space (RKHS). However, this assumption can be
violated in practice (e.g., when the integrand is a black box function), and no general theory has been established for the convergence of kernel quadratures in such
misspecified settings. Our contribution is in proving that kernel quadratures can
be consistent even when the integrand does not belong to the assumed RKHS,
i.e., when the integrand is less smooth than assumed. Specifically, we derive convergence rates that depend on the (unknown) lesser smoothness of the integrand,
where the degree of smoothness is expressed via powers of RKHSs or via Sobolev
spaces.
1 Introduction
Numerical integration, or quadrature, is a fundamental task in the construction of various statistical
and machine learning algorithms. For instance, in Bayesian learning, numerical integration is generally required for the computation of marginal likelihood in model selection, and for the marginalization of parameters in fully Bayesian prediction, etc. [20]. It also offers flexibility to probabilistic
modeling, since, e.g., it enables us to use a prior that is not conjugate with a likelihood function.
Let $P$ be a (known) probability distribution on a measurable space $\mathcal{X}$ and $f$ be an integrand on $\mathcal{X}$. Suppose that the integral $\int f(x)\,dP(x)$ has no closed form solution. One standard form of numerical integration is to approximate the integral as a weighted sum of function values $f(X_1), \dots, f(X_n)$ by appropriately choosing the points $X_1, \dots, X_n \in \mathcal{X}$ and weights $w_1, \dots, w_n \in \mathbb{R}$:
$$\sum_{i=1}^{n} w_i f(X_i) \approx \int f(x)\,dP(x). \qquad (1)$$
For example, the simplest Monte Carlo method generates the points $X_1, \dots, X_n$ as an i.i.d. sample from $P$ and uses equal weights $w_1 = \cdots = w_n = 1/n$. Convergence rates of such Monte Carlo methods are of the form $n^{-1/2}$, which can be slow for practical purposes. For instance, in situations where the evaluation of the integrand requires heavy computations, $n$ should be small and Monte Carlo would perform poorly; such situations typically appear in modern scientific and engineering applications, and thus quadratures with faster convergence rates are desirable [18].
One way of achieving faster rates is to exploit one's prior knowledge or assumption about the integrand (e.g. the degree of smoothness) in the construction of a weighted point set $\{(w_i, X_i)\}_{i=1}^n$.
Reproducing kernel Hilbert spaces (RKHS) have been successfully used for this purpose, with examples being Quasi Monte Carlo (QMC) methods based on RKHSs [13] and Bayesian quadratures [19]; see e.g. [11, 6] and references therein. We will refer to such methods as kernel-based quadrature rules, or simply kernel quadratures, in this paper. A kernel quadrature assumes that the integrand $f$ belongs to an RKHS consisting of smooth functions (such as Sobolev spaces), and constructs the weighted points $\{(w_i, X_i)\}_{i=1}^n$ so that the worst case error in that RKHS is small. Then an error rate of the form $n^{-b}$, $b \geq 1$, which is much faster than the rates of Monte Carlo methods, is guaranteed, with $b$ being a constant representing the degree of smoothness of the RKHS (e.g., the order of differentiability). Because of this nice property, kernel quadratures have been studied extensively in recent years [7, 3, 5, 2, 17] and have started to find applications in machine learning and statistics [23, 14, 12, 6].
However, if the integrand $f$ does not belong to the assumed RKHS (i.e. if $f$ is less smooth than assumed), there is no known theoretical guarantee for a fast convergence rate or even the consistency of kernel quadratures. Such misspecification is likely to happen if one does not have full knowledge of the integrand; such situations typically occur when the integrand is a black box function. As an illustrative example, let us consider the problem of illumination integration in computer graphics (see e.g. Sec. 5.2.4 of [6]). The task is to compute the total amount of light arriving at a camera in a virtual environment. This is solved by numerical integration with integrand $f(x)$ being the intensity of light arriving at the camera from a direction $x$ (angle). The value $f(x)$ is only given by simulation of the environment for each $x$, so $f$ is a black box function. In such a situation, one's assumption on the integrand can be misspecified. Establishing convergence guarantees for such misspecified settings has been recognized as an important open problem in the literature [6, Section 6].
Contributions. The main contribution of this paper is in providing general convergence guarantees for kernel-based quadrature rules in misspecified settings. Specifically, we make the following contributions:

- In Section 4, we prove that consistency can be guaranteed even when the integrand $f$ does not belong to the assumed RKHS. Specifically, we derive a convergence rate of the form $n^{-\theta b}$, where $0 < \theta \leq 1$ is a constant characterizing the (relative) smoothness of the integrand. In other words, the integration error decays at a speed depending on the (unknown) smoothness of the integrand. This guarantee is applicable to kernel quadratures that employ random points.

- We apply this result to QMC methods called lattice rules (with randomized points) and the quadrature rule by Bach [2], for the setting where the RKHS is a Korobov space. We show that even when the integrand is less smooth than assumed, the error rate becomes the same as for the case when the (unknown) smoothness is known; namely, we show that these methods are adaptive to the unknown smoothness of the integrand.

- In Section 5, we provide guarantees for kernel quadratures with deterministic points, by focusing on specific cases where the RKHS is a Sobolev space $W_2^r$ of order $r \in \mathbb{N}$ (the order of differentiability). We prove that consistency can be guaranteed even if $f \in W_2^s \setminus W_2^r$ where $s \leq r$, i.e., the integrand $f$ belongs to a Sobolev space $W_2^s$ of lesser smoothness. We derive a convergence rate of the form $n^{-bs/r}$, where the ratio $s/r$ determines the relative degree of smoothness.

- As an important consequence, we show that if weighted points $\{(w_i, X_i)\}_{i=1}^n$ achieve the optimal rate in $W_2^r$, then they also achieve the optimal rate in $W_2^s$. In other words, to achieve the optimal rate for an integrand $f$ belonging to $W_2^s$, one does not need to know the smoothness $s$ of the integrand; one only needs to know an upper bound $s \leq r$.
This paper is organized as follows. In Section 2, we describe kernel-based quadrature rules, and
formally state the goal and setting of theoretical analysis in Section 3. We present our contributions
in Sections 4 and 5. Proofs are collected in the supplementary material.
Related work. Our work is close in spirit to [17], which discusses situations where the true integrand is smoother than assumed (complementary to ours) and proposes a control functional approach to make kernel quadratures adaptive to the (unknown) greater smoothness. We also note that there are certain quadratures which are adaptive to less smooth integrands [8, 9, 10]. On the other hand, our aim here is to provide general theoretical guarantees that are applicable to a wide class of kernel-based quadrature rules.
Notation. For $x \in \mathbb{R}^d$, let $\{x\} \in [0,1]^d$ denote its fractional parts. For a probability distribution $P$ on a measurable space $\mathcal{X}$, let $L_2(P)$ be the Hilbert space of square-integrable functions with respect to $P$. If $P$ is the Lebesgue measure on $\mathcal{X} \subset \mathbb{R}^d$, let $L_2(\mathcal{X}) := L_2(P)$ and further $L_2 := L_2(\mathbb{R}^d)$. Let $C_0^s := C_0^s(\mathbb{R}^d)$ be the set of all functions on $\mathbb{R}^d$ that are continuously differentiable up to order $s \in \mathbb{N}$ and vanish at infinity. Given a function $f$ and weighted points $\{(w_i, X_i)\}_{i=1}^n$, $Pf := \int f(x)\,dP(x)$ and $P_n f := \sum_{i=1}^n w_i f(X_i)$ denote the integral and its numerical approximation, respectively.
2 Kernel-based quadrature rules
Suppose that one has prior knowledge on certain properties of the integrand $f$ (e.g. its order of differentiability). A kernel quadrature exploits this knowledge by expressing it as the assumption that $f$ belongs to a certain RKHS $\mathcal{H}$ possessing those properties, and then constructing weighted points $\{(w_i, X_i)\}_{i=1}^n$ so that the error of integration is small for every function in the RKHS. More precisely, it pursues the minimax strategy that aims to minimize the worst case error defined by
$$e_n(P; \mathcal{H}) := \sup_{f \in \mathcal{H}: \|f\|_{\mathcal{H}} \leq 1} |Pf - P_n f| = \sup_{f \in \mathcal{H}: \|f\|_{\mathcal{H}} \leq 1} \left| \int f(x)\,dP(x) - \sum_{i=1}^{n} w_i f(X_i) \right|, \qquad (2)$$
where $\|\cdot\|_{\mathcal{H}}$ denotes the norm of $\mathcal{H}$. The use of an RKHS is beneficial (compared to other function spaces), because it results in an analytic expression of the worst case error (2) in terms of the reproducing kernel. Namely, one can explicitly compute (2) in the construction of $\{(w_i, X_i)\}_{i=1}^n$ as a criterion to be minimized. Below we describe this as well as examples of kernel quadratures.
2.1 The worst case error in RKHS
Let $\mathcal{X}$ be a measurable space and $\mathcal{H}$ be an RKHS of functions on $\mathcal{X}$ with $k : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ as the bounded reproducing kernel. By the reproducing property, it is easy to verify that $Pf = \langle f, m_P \rangle_{\mathcal{H}}$ and $P_n f = \langle f, m_{P_n} \rangle_{\mathcal{H}}$ for all $f \in \mathcal{H}$, where $\langle \cdot, \cdot \rangle_{\mathcal{H}}$ denotes the inner product of $\mathcal{H}$, and
$$m_P := \int k(\cdot, x)\,dP(x) \in \mathcal{H}, \qquad m_{P_n} := \sum_{i=1}^{n} w_i k(\cdot, X_i) \in \mathcal{H}.$$
Therefore, the worst case error (2) can be written as the difference between $m_P$ and $m_{P_n}$:
$$\sup_{\|f\|_{\mathcal{H}} \leq 1} |Pf - P_n f| = \sup_{\|f\|_{\mathcal{H}} \leq 1} \langle f, m_P - m_{P_n} \rangle_{\mathcal{H}} = \|m_P - m_{P_n}\|_{\mathcal{H}}, \qquad (3)$$
where $\|\cdot\|_{\mathcal{H}}$ is the norm of $\mathcal{H}$ defined by $\|f\|_{\mathcal{H}} = \sqrt{\langle f, f \rangle_{\mathcal{H}}}$ for $f \in \mathcal{H}$. From (3), it is easy to see that for every $f \in \mathcal{H}$, the integration error $|P_n f - Pf|$ is bounded by the worst case error:
$$|P_n f - Pf| \leq \|f\|_{\mathcal{H}} \|m_P - m_{P_n}\|_{\mathcal{H}} = \|f\|_{\mathcal{H}}\, e_n(P; \mathcal{H}).$$
We refer the reader to [21, 11] for details of these derivations. Using the reproducing property of $k$, the r.h.s. of (3) can be alternately written as:
$$e_n(P; \mathcal{H}) = \sqrt{\iint k(x, \tilde{x})\,dP(x)\,dP(\tilde{x}) - 2\sum_{i=1}^{n} w_i \int k(x, X_i)\,dP(x) + \sum_{i=1}^{n} \sum_{j=1}^{n} w_i w_j k(X_i, X_j)}. \qquad (4)$$
The integrals in (4) are known in closed form for many pairs of $k$ and $P$; see e.g. Table 1 of [6]. For instance, if $P$ is the uniform distribution on $\mathcal{X} = [0,1]^d$ and $k$ is the Korobov kernel described below, then $\int k(y, x)\,dP(x) = 1$ for all $y \in \mathcal{X}$. To pursue the aforementioned minimax strategy, one can explicitly use the formula (4) to minimize the worst case error (2). Often $\mathcal{H}$ is chosen as an RKHS consisting of smooth functions, and the degree of smoothness is what a user specifies; we describe this in the example below.
2.2 Examples of RKHSs: Korobov spaces
The setting $\mathcal{X} = [0,1]^d$ is standard in the literature on numerical integration; see e.g. [11]. In this setting, Korobov spaces and Sobolev spaces have been widely used as RKHSs.¹ We describe the former here; for the latter, see Section 5.

¹ Korobov spaces are also known as periodic Sobolev spaces in the literature [4, p. 318].
Korobov space on $[0,1]$. The Korobov space $W_{\mathrm{Kor}}^{\alpha}([0,1])$ of order $\alpha \in \mathbb{N}$ is an RKHS whose kernel $k_\alpha$ is given by
$$k_\alpha(x, y) := 1 + \frac{(-1)^{\alpha-1}(2\pi)^{2\alpha}}{(2\alpha)!} B_{2\alpha}(\{x - y\}), \qquad (5)$$
where $B_{2\alpha}$ denotes the $2\alpha$-th Bernoulli polynomial. $W_{\mathrm{Kor}}^{\alpha}([0,1])$ consists of periodic functions on $[0,1]$ whose derivatives up to the $(\alpha-1)$-th are absolutely continuous and whose $\alpha$-th derivative belongs to $L_2([0,1])$ [16]. Therefore the order $\alpha$ represents the degree of smoothness of functions in $W_{\mathrm{Kor}}^{\alpha}([0,1])$.
Korobov space on $[0,1]^d$. For $d \geq 2$, the kernel of the Korobov space is given as the product of one-dimensional kernels (5):
$$k_{\alpha,d}(x, y) := k_\alpha(x_1, y_1) \cdots k_\alpha(x_d, y_d), \quad x := (x_1, \dots, x_d)^T,\; y := (y_1, \dots, y_d)^T \in [0,1]^d. \qquad (6)$$
The induced Korobov space $W_{\mathrm{Kor}}^{\alpha}([0,1]^d)$ on $[0,1]^d$ is then the tensor product of one-dimensional Korobov spaces: $W_{\mathrm{Kor}}^{\alpha}([0,1]^d) := W_{\mathrm{Kor}}^{\alpha}([0,1]) \otimes \cdots \otimes W_{\mathrm{Kor}}^{\alpha}([0,1])$. Therefore it consists of functions having square-integrable mixed partial derivatives up to the order $\alpha$ in each variable. This means that by using the kernel (6) in the computation of (4), one can make the assumption that the integrand $f$ has smoothness of degree $\alpha$ in each variable. In other words, one can incorporate one's knowledge or belief about $f$ into the construction of the weighted points $\{(w_i, X_i)\}$ via the choice of $\alpha$.
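Since both integrals in (4) equal 1 for the Korobov kernel with uniform $P$ (as noted in Section 2.1), the worst case error can be evaluated from the kernel matrix alone. The sketch below does this for $\alpha = 1$ in one dimension, using the closed form $B_2(t) = t^2 - t + 1/6$; it is our own illustration, not code from the paper.

```python
import numpy as np

def korobov_kernel(x, y):
    # Korobov kernel (5) on [0,1] for alpha = 1: since B_2(t) = t^2 - t + 1/6
    # and (-1)^0 * (2*pi)^2 / 2! = 2*pi^2, k_1(x, y) = 1 + 2*pi^2 * B_2({x - y}).
    t = np.mod(x - y, 1.0)  # fractional part {x - y}
    return 1.0 + 2.0 * np.pi ** 2 * (t ** 2 - t + 1.0 / 6.0)

def worst_case_error(points, weights):
    # Formula (4) for the uniform distribution P on [0,1]: both integrals
    # in (4) equal 1 for the Korobov kernel, so only the kernel matrix remains.
    K = korobov_kernel(points[:, None], points[None, :])
    sq = 1.0 - 2.0 * weights.sum() + weights @ K @ weights
    return np.sqrt(max(sq, 0.0))

n = 100
X = np.random.rand(n)    # i.i.d. uniform points (plain Monte Carlo)
w = np.full(n, 1.0 / n)  # equal weights
print(worst_case_error(X, w))
```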
2.3 Examples of kernel-based quadrature rules
We briefly describe examples of kernel-based quadrature rules.
Quasi Monte Carlo (QMC). These methods typically focus on the setting where $\mathcal{X} = [0,1]^d$ with $P$ being the uniform distribution on $[0,1]^d$, and employ equal weights $w_1 = \cdots = w_n = 1/n$. Popular examples are lattice rules and digital nets/sequences. Points $X_1, \dots, X_n$ are selected in a deterministic way so that the worst case error (4) is as small as possible. Such deterministic points are then often randomized to obtain unbiased integral estimators, as we will explain in Section 4.2. For a review of these methods, see [11].

For instance, lattice rules generate $X_1, \dots, X_n$ in the following way (for simplicity assume $n$ is prime). Let $z \in \{1, \dots, n-1\}^d$ be a generator vector. Then the points are defined as $X_i = \{iz/n\} \in [0,1]^d$ for $i = 1, \dots, n$. Here $z$ is selected so that the resulting worst case error (2) becomes as small as possible. The CBC (Component-By-Component) construction is a fast method that makes use of the formula (4) to achieve this; see Section 5 of [11] and references therein. Lattice rules applied to the Korobov space $W_{\mathrm{Kor}}^{\alpha}([0,1]^d)$ can achieve the rate $e_n(P, W_{\mathrm{Kor}}^{\alpha}([0,1]^d)) = O(n^{-\alpha+\epsilon})$ for the worst case error, with $\epsilon > 0$ arbitrarily small [11, Theorem 5.12].
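A minimal sketch of rank-1 lattice point generation follows. The generating vector below is a placeholder we picked for illustration; in practice $z$ would come from a CBC construction minimizing (4).

```python
import numpy as np

def lattice_points(n, z):
    # Rank-1 lattice rule: X_i = {i * z / n} in [0,1]^d for i = 1, ..., n.
    i = np.arange(1, n + 1)[:, None]
    return np.mod(i * np.asarray(z)[None, :] / n, 1.0)

# n should be prime; z = (1, 27) is just a placeholder generating vector,
# not one optimized by the CBC construction.
X = lattice_points(101, z=[1, 27])
```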
Bayesian quadratures. These methods are applicable to general $\mathcal{X}$ and $P$, and employ non-uniform weights. Points $X_1, \dots, X_n$ are selected either deterministically or randomly. With the points fixed, weights $w_1, \dots, w_n$ are obtained by minimizing (4), which can be done by solving a linear system of size $n$. Such methods are called Bayesian quadratures, since the resulting estimate $P_n f$ in this case is exactly the posterior mean of the integral $Pf$ given "observations" $\{(X_i, f(X_i))\}_{i=1}^n$, with a Gaussian process prior on the integrand $f$ with covariance kernel $k$. We refer to [6] for these methods.

For instance, the algorithm by Bach [2] proceeds as follows, for the case of $\mathcal{H}$ being a Korobov space $W_{\mathrm{Kor}}^{\alpha}([0,1]^d)$ and $P$ being the uniform distribution on $[0,1]^d$: (i) generate points $X_1, \dots, X_n$ independently from the uniform distribution on $[0,1]^d$; (ii) compute weights $w_1, \dots, w_n$ by minimizing (4), subject to the constraint $\sum_{i=1}^n w_i^2 \leq 4/n$. Bach [2] proved that this procedure gives the error rate $e_n(P, W_{\mathrm{Kor}}^{\alpha}([0,1]^d)) = O(n^{-\alpha+\epsilon})$ for $\epsilon > 0$ arbitrarily small.²

² Note that in [2], the degree of smoothness is expressed in terms of $s := \alpha d$.
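With the points fixed, minimizing the square of (4) over the weights gives a linear system in the kernel matrix. The sketch below (reusing korobov_kernel from the earlier sketch) shows the plain unconstrained solution for the Korobov kernel with uniform $P$, where $\int k(x, X_i)\,dP(x) = 1$; note that we omit Bach's constraint $\sum_i w_i^2 \leq 4/n$ and any regularization, so this is not step (ii) exactly.

```python
import numpy as np

def bayesian_quadrature_weights(X, kernel):
    # Setting the gradient of the squared worst case error (4) to zero gives
    # K w = z with z_i = \int k(x, X_i) dP(x); for the Korobov kernel and
    # uniform P on [0,1] we have z_i = 1.
    K = kernel(X[:, None], X[None, :])
    return np.linalg.solve(K, np.ones(len(X)))

X = np.random.rand(50)  # step (i): i.i.d. uniform points
w = bayesian_quadrature_weights(X, korobov_kernel)
```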
3 Setting and objective of theoretical analysis
We now formally state the setting and objective of our theoretical analysis in general form. Let $P$ be a known distribution and $\mathcal{H}$ be an RKHS. Our starting point is that weighted points $\{(w_i, X_i)\}_{i=1}^n$
are already constructed for each $n \in \mathbb{N}$ by some quadrature rule³, and that these provide a consistent approximation of $P$ in terms of the worst case error:
$$e_n(P; \mathcal{H}) = \|m_P - m_{P_n}\|_{\mathcal{H}} = O(n^{-b}) \quad (n \to \infty), \qquad (7)$$
where $b > 0$ is some constant. Here we do not specify the quadrature algorithm explicitly, in order to establish results applicable to a wide class of kernel quadratures simultaneously.

Let $f$ be an integrand that is not included in the RKHS: $f \notin \mathcal{H}$. Namely, we consider a misspecified setting. Our aim is to derive convergence rates for the integration error
$$|P_n f - Pf| = \left| \sum_{i=1}^{n} w_i f(X_i) - \int f(x)\,dP(x) \right|$$
based on the assumption (7). This will be done by assuming a certain regularity condition on $f$ which expresses the (unknown) lesser smoothness of $f$. For example, this is the case when the weighted points are constructed by assuming a Korobov space of order $\alpha \in \mathbb{N}$, but the integrand $f$ belongs to a Korobov space of order $\beta < \alpha$: in this case, $f$ is less smooth than assumed. As mentioned in Section 1, such misspecification is likely to happen if $f$ is a black box function. But misspecification can also occur even when one has full knowledge of $f$. As explained in Section 2.1, the kernel $k$ should be chosen so that the integrals in (4) allow analytic solutions w.r.t. $P$. Namely, the distribution $P$ determines an available class of kernels (e.g. Gaussian kernels for a Gaussian distribution), and therefore the RKHS of a kernel from this class may not contain the integrand of interest. This situation can be seen in the application to random Fourier features [23], for example.
4 Analysis 1: General RKHS with random points
We first focus on kernel quadratures with random points. To this end, we need to introduce certain assumptions on (i) the construction of the weighted points $\{(w_i, X_i)\}_{i=1}^n$ and (ii) the smoothness of the integrand $f$; we discuss these in Sections 4.1 and 4.2, respectively. In particular, we introduce the notion of powers of RKHSs [22] in Section 4.2, which enables us to characterize the (relative) smoothness of the integrand. We then state our main result in Section 4.3, and illustrate it with QMC lattice rules (with randomization) and the Bayesian quadrature by Bach [2] in Korobov RKHSs.
4.1 Assumption on random points $X_1, \dots, X_n$

Assumption 1. There exists a probability distribution $Q$ on $\mathcal{X}$ satisfying the following properties: (i) $P$ has a bounded density function w.r.t. $Q$; (ii) there is a constant $D > 0$ independent of $n$, such that
$$\left( E\left[ \frac{1}{n} \sum_{i=1}^{n} g^2(X_i) \right] \right)^{1/2} \leq D \|g\|_{L_2(Q)}, \quad \forall g \in L_2(Q), \qquad (8)$$
where $E[\cdot]$ denotes the expectation w.r.t. the joint distribution of $X_1, \dots, X_n$.
Assumption 1 is fairly general, as it does not specify any particular distribution of the points $X_1, \dots, X_n$, but just requires that the expectations over these points satisfy (8) for some distribution $Q$ (note also that it allows the points to be dependent). For instance, consider the case where $X_1, \dots, X_n$ are independently generated from a user-specified distribution $Q$; in this case, $Q$ serves as a proposal distribution. Then (8) holds for $D = 1$ with equality. Examples in this case include the Bayesian quadratures by Bach [2] and Briol et al. [6] with random points.

Assumption 1 is also satisfied by QMC methods that apply randomization to deterministic points, which is common in the literature [11, Sections 2.9 and 2.10]. Popular methods for randomization
are random shift and scrambling, both of which satisfy Assumption 1 for $D = 1$ with equality, where $Q\,(= P)$ is the uniform distribution on $\mathcal{X} = [0,1]^d$. This is because, in general, randomization is applied to make the integral estimator unbiased: $E\left[\frac{1}{n}\sum_{i=1}^{n} f(X_i)\right] = \int_{[0,1]^d} f(x)\,dx$ [11, Section 2.9]. For instance, the random shift is done as follows. Let $x_1, \dots, x_n \in [0,1]^d$ be deterministic points generated by a QMC method. Let $\Delta$ be a random sample from the uniform distribution on $[0,1]^d$. Then each $X_i$ is given as $X_i := \{x_i + \Delta\} \in [0,1]^d$. Therefore $E\left[\frac{1}{n}\sum_{i=1}^{n} g^2(X_i)\right] = \int_{[0,1]^d} g^2(x)\,dx = \int g^2(x)\,dQ(x)$ for all $g \in L_2(Q)$, so (8) holds for $D = 1$ with equality.
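A sketch of this random shift, reusing lattice_points from the earlier sketch, is given below; with the shift, the equal-weight estimator is unbiased for the uniform distribution.

```python
import numpy as np

def random_shift(points):
    # One uniform sample Delta applied to all points, componentwise modulo 1.
    delta = np.random.rand(points.shape[1])
    return np.mod(points + delta, 1.0)

def qmc_estimate(f, points):
    # Equal-weight estimator (1/n) * sum_i f(X_i); with the shift above it
    # is an unbiased estimate of \int f(x) dx over [0,1]^d.
    return float(np.mean([f(x) for x in points]))

X = random_shift(lattice_points(101, z=[1, 27]))
est = qmc_estimate(lambda x: np.prod(np.sin(np.pi * x)), X)
```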
³ Note that here the weighted points should be written as $\{(w_i^{(n)}, X_i^{(n)})\}_{i=1}^n$, since they are constructed depending on the number of points $n$. However, we just write $\{(w_i, X_i)\}_{i=1}^n$ for notational simplicity.
4.2 Assumption on the integrand via powers of RKHSs
To state our assumption on the integrand $f$, we need to introduce powers of RKHSs [22, Section 4]. Let $0 < \theta \leq 1$ be a constant. First, with the distribution $Q$ in Assumption 1, we require that the kernel satisfies
$$\int k(x, x)\,dQ(x) < \infty.$$
For example, this is always satisfied if the kernel is bounded. We also assume that the support of $Q$ is the entire $\mathcal{X}$ and that $k$ is continuous. These conditions imply Mercer's theorem [22, Theorem 3.1 and Lemma 2.3], which guarantees the following expansion of the kernel $k$:
$$k(x, y) = \sum_{i=1}^{\infty} \mu_i e_i(x) e_i(y), \quad x, y \in \mathcal{X}, \qquad (9)$$
where $\mu_1 \geq \mu_2 \geq \cdots > 0$ and $\{e_i\}_{i=1}^{\infty}$ is an orthonormal series in $L_2(Q)$; in particular, $\{\sqrt{\mu_i}\, e_i\}_{i=1}^{\infty}$ forms an orthonormal basis of $\mathcal{H}$. Here the convergence of the series in (9) is pointwise. Assume that $\sum_{i=1}^{\infty} \mu_i^{\theta} e_i^2(x) < \infty$ holds for all $x \in \mathcal{X}$. Then the $\theta$-th power of the kernel $k$ is the function $k^{\theta} : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ defined by
$$k^{\theta}(x, y) := \sum_{i=1}^{\infty} \mu_i^{\theta} e_i(x) e_i(y), \quad x, y \in \mathcal{X}. \qquad (10)$$
This is again a reproducing kernel [22, Proposition 4.2], and defines an RKHS called the $\theta$-th power of the RKHS $\mathcal{H}$:
$$\mathcal{H}^{\theta} = \left\{ \sum_{i=1}^{\infty} a_i \mu_i^{\theta/2} e_i \;:\; \sum_{i=1}^{\infty} a_i^2 < \infty \right\}.$$
This is an intermediate space between $L_2(Q)$ and $\mathcal{H}$, and the constant $0 < \theta \leq 1$ determines how close $\mathcal{H}^{\theta}$ is to $\mathcal{H}$. For instance, if $\theta = 1$ we have $\mathcal{H}^{\theta} = \mathcal{H}$, and $\mathcal{H}^{\theta}$ approaches $L_2(Q)$ as $\theta \to +0$. Indeed, $\mathcal{H}^{\theta}$ is nested w.r.t. $\theta$:
$$\mathcal{H} = \mathcal{H}^1 \subset \mathcal{H}^{\theta} \subset \mathcal{H}^{\theta'} \subset L_2(Q), \quad \text{for all } 0 < \theta' < \theta < 1. \qquad (11)$$
In other words, $\mathcal{H}^{\theta}$ gets larger as $\theta$ decreases. If $\mathcal{H}$ is an RKHS consisting of smooth functions, then $\mathcal{H}^{\theta}$ contains less smooth functions than those in $\mathcal{H}$; we will show this in the example below.
Assumption 2. The integrand $f$ lies in $\mathcal{H}^{\theta}$ for some $0 < \theta \leq 1$.

We note that Assumption 2 is equivalent to assuming that $f$ belongs to the interpolation space $[L_2(Q), \mathcal{H}]_{\theta,2}$, or lies in the range of a power of a certain integral operator [22, Theorem 4.6].
Powers of tensor RKHSs. Let us mention the important case where the RKHS $\mathcal{H}$ is given as the tensor product of individual RKHSs $\mathcal{H}_1, \dots, \mathcal{H}_d$ on the spaces $\mathcal{X}_1, \dots, \mathcal{X}_d$, i.e., $\mathcal{H} = \mathcal{H}_1 \otimes \cdots \otimes \mathcal{H}_d$ and $\mathcal{X} = \mathcal{X}_1 \times \cdots \times \mathcal{X}_d$. In this case, if the distribution $Q$ is the product of individual distributions $Q_1, \dots, Q_d$ on $\mathcal{X}_1, \dots, \mathcal{X}_d$, it can easily be shown that the power RKHS $\mathcal{H}^{\theta}$ is the tensor product of the individual power RKHSs $\mathcal{H}_i^{\theta}$:
$$\mathcal{H}^{\theta} = \mathcal{H}_1^{\theta} \otimes \cdots \otimes \mathcal{H}_d^{\theta}. \qquad (12)$$
Examples: powers of Korobov spaces. Let us consider the Korobov space $W_{\mathrm{Kor}}^{\alpha}([0,1]^d)$ with $Q$ being the uniform distribution on $[0,1]^d$. The Korobov kernel (5) has the Mercer representation
$$k_\alpha(x, y) = 1 + \sum_{i=1}^{\infty} \frac{1}{i^{2\alpha}} \left[ c_i(x) c_i(y) + s_i(x) s_i(y) \right], \qquad (13)$$
where $c_i(x) := \sqrt{2} \cos 2\pi i x$ and $s_i(x) := \sqrt{2} \sin 2\pi i x$. Note that $c_0(x) := 1$ and $\{c_i, s_i\}_{i=1}^{\infty}$ constitute an orthonormal basis of $L_2([0,1])$. From (10) and (13), the $\theta$-th power of the Korobov kernel $k_\alpha$ is given by
$$k_\alpha^{\theta}(x, y) = 1 + \sum_{i=1}^{\infty} \frac{1}{i^{2\alpha\theta}} \left[ c_i(x) c_i(y) + s_i(x) s_i(y) \right].$$
If $\alpha\theta \in \mathbb{N}$, this is exactly the kernel $k_{\alpha\theta}$ of the Korobov space $W_{\mathrm{Kor}}^{\alpha\theta}([0,1])$ of lower order $\alpha\theta$. In other words, $W_{\mathrm{Kor}}^{\alpha\theta}([0,1])$ is nothing but the $\theta$-th power of $W_{\mathrm{Kor}}^{\alpha}([0,1])$. From this and (12), we can also show that the $\theta$-th power of $W_{\mathrm{Kor}}^{\alpha}([0,1]^d)$ is $W_{\mathrm{Kor}}^{\alpha\theta}([0,1]^d)$ for $d \geq 2$.
4.3 Result: Convergence rates for general RKHSs with random points
The following result guarantees the consistency of kernel quadratures for integrands satisfying Assumption 2, i.e., $f \in \mathcal{H}^{\theta}$.

Theorem 1. Let $\{(w_i, X_i)\}_{i=1}^n$ be such that $E[e_n(P; \mathcal{H})] = O(n^{-b})$ for some $b > 0$ and $E[\sum_{i=1}^n w_i^2] = O(n^{-2c})$ for some $0 < c \leq 1/2$, as $n \to \infty$. Assume also that $\{X_i\}_{i=1}^n$ satisfies Assumption 1. Let $0 < \theta \leq 1$ be a constant. Then for any $f \in \mathcal{H}^{\theta}$, we have
$$E\left[|P_n f - Pf|\right] = O(n^{-\theta b + (1/2 - c)(1 - \theta)}) \quad (n \to \infty). \qquad (14)$$
Remark 1. (a) The expectation in the assumption $E[e_n(P; \mathcal{H})] = O(n^{-b})$ is w.r.t. the joint distribution of the weighted points $\{(w_i, X_i)\}_{i=1}^n$.

(b) The assumption $E[\sum_{i=1}^n w_i^2] = O(n^{-2c})$ requires that each weight $w_i$ decreases as $n$ increases. For instance, if $\max_{i \in \{1,\dots,n\}} |w_i| = O(n^{-1})$, we have $c = 1/2$. For QMC methods the weights are uniform, $w_i = 1/n$, so we always have $c = 1/2$. The quadrature rule by Bach [2] also satisfies $c = 1/2$; see Section 2.3.

(c) Let $c = 1/2$. Then the rate in (14) becomes $O(n^{-\theta b})$, which shows that the integral estimator $P_n f$ is consistent even when the integrand $f$ does not belong to $\mathcal{H}$ (recall $\mathcal{H} \subset \mathcal{H}^{\theta}$ for $\theta < 1$; see also (11)). The resulting rate $O(n^{-\theta b})$ is determined by the constant $0 < \theta \leq 1$ in the assumption $f \in \mathcal{H}^{\theta}$, which characterizes the closeness of $f$ to $\mathcal{H}$.

(d) When $\theta = 1$ (the well-specified case), irrespective of the value of $c$, the rate in (14) becomes $O(n^{-b})$, which recovers the rate of the worst case error $E[e_n(P; \mathcal{H})] = O(n^{-b})$.
Examples in Korobov spaces. Let us illustrate Theorem 1 in the setting described earlier. Let $\mathcal{X} = [0,1]^d$, $\mathcal{H} = W_{\mathrm{Kor}}^{\alpha}([0,1]^d)$, and $P$ be the uniform distribution on $[0,1]^d$. Then $\mathcal{H}^{\theta} = W_{\mathrm{Kor}}^{\alpha\theta}([0,1]^d)$, as discussed in Section 4.2. Let us consider the two methods discussed in Section 2.3: (i) the QMC lattice rules with randomization and (ii) the algorithm by Bach [2]. For both methods, we have $c = 1/2$, and the distribution $Q$ in Assumption 1 is uniform on $[0,1]^d$ in this setting. As mentioned before, these methods achieve the rate $n^{-\alpha+\epsilon}$ for arbitrarily small $\epsilon > 0$ in the well-specified setting: $b = \alpha - \epsilon$ in our notation.

Figure 1: Simulation results.
The assumption $f \in \mathcal{H}^{\theta}$ then reads $f \in W_{\mathrm{Kor}}^{\alpha\theta}([0,1]^d)$ for $0 < \theta \leq 1$. For such an integrand $f$, we obtain the rate $O(n^{-\alpha\theta+\epsilon})$ in (14) with arbitrarily small $\epsilon > 0$. This is the same rate as for the well-specified case where $W_{\mathrm{Kor}}^{\alpha\theta}([0,1]^d)$ was assumed for the construction of the weighted points. Namely, we have shown that these methods are adaptive to the unknown smoothness of the integrand.
For the algorithm by Bach [2], we conducted simulation experiments to support this observation, using code available from http://www.di.ens.fr/~fbach/quadrature.html. The setting is as described above with $d = 1$, and the weights are obtained without regularization as in [2]. The result is shown in Figure 1, where $r\,(=\alpha)$ denotes the assumed smoothness and $s\,(=\alpha\theta)$ is the (unknown) smoothness of the integrand. The straight lines are the (asymptotic) upper bounds from Theorem 1 (slope $-s$, with the intercept fitted for $n \geq 2^4$), and the corresponding solid lines are the numerical results (both on log-log scales). Averages over 100 runs are shown. The result indeed demonstrates the adaptability of the quadrature rule by Bach for less smooth functions (i.e. $s = 1, 2, 3$). We observed similar results for the QMC lattice rules (reported in Appendix D in the supplement).
5 Analysis 2: Sobolev RKHS with deterministic points
In Section 4, we have provided guarantees for methods that employ random points. However, the
result does not apply to those with deterministic points, such as (a) QMC methods without randomization, (b) Bayesian quadratures with deterministic points, and (c) kernel herding [7].
We aim here to provide guarantees for quadrature rules with deterministic points. To this end, we
focus on the setting where $\mathcal{X} = \mathbb{R}^d$ and $\mathcal{H}$ is a Sobolev space [1]. The Sobolev space $W_2^r$ of order $r \in \mathbb{N}$ is defined by
$$W_2^r := \{ f \in L_2 : D^{\beta} f \in L_2 \text{ exists for all } |\beta| \leq r \},$$
where $\beta := (\beta_1, \dots, \beta_d)$ with $\beta_i \geq 0$ is a multi-index with $|\beta| := \sum_{i=1}^{d} \beta_i$, and $D^{\beta} f$ is the $\beta$-th (weak) derivative of $f$. Its norm is defined by $\|f\|_{W_2^r} = \left( \sum_{|\beta| \leq r} \|D^{\beta} f\|_{L_2}^2 \right)^{1/2}$. For $r > d/2$, this is an RKHS with the reproducing kernel $k$ being the Matérn kernel; see Section 4.2.1 of [20] for the definition.
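For a concrete picture, the Matérn kernel with smoothness $\nu = 3/2$ corresponds to the Sobolev order $r = \nu + d/2 = 2$ when $d = 1$. The sketch below gives its standard form; we have not matched the normalization constants used in [20, Sec. 4.2.1], so treat the scaling as an assumption.

```python
import numpy as np

def matern32(x, y, lengthscale=1.0):
    # Matern kernel with smoothness nu = 3/2 (Sobolev order r = 2 for d = 1);
    # constants are the textbook form, not the exact normalization of [20].
    d = np.sqrt(3.0) * np.abs(x - y) / lengthscale
    return (1.0 + d) * np.exp(-d)
```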
Our assumption on the integrand $f$ is that it belongs to a Sobolev space $W_2^s$ of a lower order $s \leq r$. Note that the order $s$ represents the smoothness of $f$ (its order of differentiability). Therefore the situation $s < r$ means that $f$ is less smooth than assumed; we consider the setting where $W_2^r$ was assumed for the construction of the weighted points.
Rates under an assumption on weights. The first result in this section is based on the same assumption on the weights as in Theorem 1.

Theorem 2. Let $\{(w_i, X_i)\}_{i=1}^n$ be such that $e_n(P; W_2^r) = O(n^{-b})$ for some $b > 0$ and $\sum_{i=1}^n w_i^2 = O(n^{-2c})$ for some $0 < c \leq 1/2$, as $n \to \infty$. Then for any $f \in C_0^s \cap W_2^s$ with $s \leq r$, we have
$$|P_n f - Pf| = O(n^{-bs/r + (1/2 - c)(1 - s/r)}) \quad (n \to \infty). \qquad (15)$$
Remark 2. (a) Let $\theta := s/r$. Then the rate in (15) can be rewritten as $O(n^{-\theta b + (1/2 - c)(1 - \theta)})$, which matches the rate of Theorem 1. In other words, Theorem 2 provides a deterministic version of Theorem 1 for the special case of Sobolev spaces.

(b) Theorem 2 can be applied to quadrature rules with equally-weighted deterministic points, such as QMC methods and kernel herding [7]. For these methods, we have $c = 1/2$ and so we obtain the rate $O(n^{-sb/r})$ in (15). The minimax optimal rate in this setting (i.e., $c = 1/2$) is given by $n^{-b}$ with $b = r/d$ [15]. For these choices of $b$ and $c$, we obtain a rate of $O(n^{-s/d})$ in (15), which is exactly the optimal rate in $W_2^s$. This leads to an important consequence: the optimal rate $O(n^{-s/d})$ can be achieved for an integrand $f \in W_2^s$ without knowing the degree of smoothness $s$; one just needs to know an upper bound $s \leq r$. Namely, any method with the optimal rate in a Sobolev space is adaptive to lesser smoothness.
Rates under an assumption on the separation radius. Theorems 1 and 2 require the assumption $\sum_{i=1}^n w_i^2 = O(n^{-2c})$. However, for some algorithms the value of $c$ may not be available. For instance, this is the case for Bayesian quadratures that compute the weights without any constraints [6]; see Section 2.3. Here we present a preliminary result that does not rely on an assumption on the weights. To this end, we introduce a quantity called the separation radius:
$$q_n := \min_{i \neq j} \|X_i - X_j\|.$$
In the result below, we assume that $q_n$ does not decrease too quickly as $n$ increases. Let $\mathrm{diam}(X_1, \dots, X_n)$ denote the diameter of the points.
Theorem 3. Let $\{(w_i, X_i)\}_{i=1}^n$ be such that $e_n(P; W_2^r) = O(n^{-b})$ for some $b > 0$ as $n \to \infty$, $q_n \geq C n^{-b/r}$ for some $C > 0$, and $\mathrm{diam}(X_1, \dots, X_n) \leq 1$. Then for any $f \in C_0^s \cap W_2^s$ with $s \leq r$, we have
$$|P_n f - Pf| = O(n^{-\frac{bs}{r}}) \quad (n \to \infty). \qquad (16)$$
Consequences similar to those of Theorems 1 and 2 can be drawn for Theorem 3. In particular, the rate in (16) coincides with that of (15) with $c = 1/2$. The assumption $q_n \geq C n^{-b/r}$ can be verified when the points form equally-spaced grids in a compact subset of $\mathbb{R}^d$. In this case, the separation radius satisfies $q_n \geq C n^{-1/d}$ for some $C > 0$. As noted above, the optimal rate for this setting is $n^{-b}$ with $b = r/d$, which implies that the separation radius satisfies the assumption, since $n^{-b/r} = n^{-1/d}$.
6 Conclusions
Kernel quadratures are powerful tools for numerical integration. However, their convergence guarantees had not been established in situations where integrands are less smooth than assumed, which
can happen in various situations in practice. In this paper, we have provided the first known theoretical guarantees for kernel quadratures in such misspecified settings.
Acknowledgments
We wish to thank the anonymous reviewers for valuable comments. We also thank Chris Oates for
fruitful discussions. This work has been supported in part by MEXT Grant-in-Aid for Scientific
Research on Innovative Areas (25120012).
References
[1] R. A. Adams and J. J. F. Fournier. Sobolev Spaces. Academic Press, 2nd edition, 2003.
[2] F. Bach. On the equivalence between kernel quadrature rules and random feature expansions. Technical report, HAL-01118276v2, 2015.
[3] F. Bach, S. Lacoste-Julien, and G. Obozinski. On the equivalence between herding and conditional gradient algorithms. In Proc. ICML 2012, pages 1359–1366, 2012.
[4] A. Berlinet and C. Thomas-Agnan. Reproducing Kernel Hilbert Spaces in Probability and Statistics. Kluwer Academic Publisher, 2004.
[5] F.-X. Briol, C. J. Oates, M. Girolami, and M. A. Osborne. Frank-Wolfe Bayesian quadrature: Probabilistic integration with theoretical guarantees. In Adv. NIPS 28, pages 1162–1170, 2015.
[6] F.-X. Briol, C. J. Oates, M. Girolami, M. A. Osborne, and D. Sejdinovic. Probabilistic integration: A role for statisticians in numerical analysis? arXiv:1512.00933v4 [stat.ML], 2016.
[7] Y. Chen, M. Welling, and A. Smola. Super-samples from kernel herding. In Proc. UAI 2010, pages 109–116, 2010.
[8] J. Dick. Explicit constructions of quasi-Monte Carlo rules for the numerical integration of high-dimensional periodic functions. SIAM J. Numer. Anal., 45:2141–2176, 2007.
[9] J. Dick. Walsh spaces containing smooth functions and quasi-Monte Carlo rules of arbitrary high order. SIAM J. Numer. Anal., 46(3):1519–1553, 2008.
[10] J. Dick. Higher order scrambled digital nets achieve the optimal rate of the root mean square error for smooth integrands. The Annals of Statistics, 39(3):1372–1398, 2011.
[11] J. Dick, F. Y. Kuo, and I. H. Sloan. High-dimensional numerical integration: the quasi-Monte Carlo way. Acta Numerica, 22:133–288, 2013.
[12] M. Gerber and N. Chopin. Sequential quasi Monte Carlo. Journal of the Royal Statistical Society, Series B (Statistical Methodology), 77(3):509–579, 2015.
[13] F. J. Hickernell. A generalized discrepancy and quadrature error bound. Mathematics of Computation of the American Mathematical Society, 67(221):299–322, 1998.
[14] S. Lacoste-Julien, F. Lindsten, and F. Bach. Sequential kernel herding: Frank-Wolfe optimization for particle filtering. In Proc. AISTATS 2015, 2015.
[15] E. Novak. Deterministic and Stochastic Error Bounds in Numerical Analysis. Springer-Verlag, 1988.
[16] E. Novak and H. Woźniakowski. Tractability of Multivariate Problems, Vol. I: Linear Information. EMS, 2008.
[17] C. J. Oates and M. Girolami. Control functionals for quasi-Monte Carlo integration. In Proc. AISTATS 2016, 2016.
[18] C. J. Oates, M. Girolami, and N. Chopin. Control functionals for Monte Carlo integration. Journal of the Royal Statistical Society, Series B, 2017, to appear.
[19] A. O'Hagan. Bayes–Hermite quadrature. Journal of Statistical Planning and Inference, 29:245–260, 1991.
[20] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[21] B. K. Sriperumbudur, A. Gretton, K. Fukumizu, B. Schölkopf, and G. R. G. Lanckriet. Hilbert space embeddings and metrics on probability measures. JMLR, 11:1517–1561, 2010.
[22] I. Steinwart and C. Scovel. Mercer's theorem on general domains: On the interaction between measures, kernels, and RKHS. Constructive Approximation, 35:363–417, 2012.
[23] J. Yang, V. Sindhwani, H. Avron, and M. W. Mahoney. Quasi-Monte Carlo feature maps for shift-invariant kernels. In Proc. ICML 2014, 2014.
An Efficient Streaming Algorithm
for the Submodular Cover Problem
Ashkan Norouzi-Fard∗
Abbas Bazzi∗
[email protected] [email protected]
Marwa El Halabi†
[email protected]
Ilija Bogunovic†
Ya-Ping Hsieh†
Volkan Cevher†
[email protected]
[email protected]
[email protected]
Abstract
We initiate the study of the classical Submodular Cover (SC) problem in the data streaming model, which we refer to as the Streaming Submodular Cover (SSC). We
show that any single pass streaming algorithm using sublinear memory in the size
of the stream will fail to provide any non-trivial approximation guarantees for SSC.
Hence, we consider a relaxed version of SSC, where we only seek to find a partial
cover. We design the first Efficient bicriteria Submodular Cover Streaming (ESC-Streaming) algorithm for this problem, and provide theoretical guarantees for its
performance supported by numerical evidence. Our algorithm finds solutions that
are competitive with the near-optimal offline greedy algorithm despite requiring
only a single pass over the data stream. In our numerical experiments, we evaluate
the performance of ESC-Streaming on active set selection and large-scale graph
cover problems.
1 Introduction
We consider the Streaming Submodular Cover (SSC) problem, where we seek to find the smallest
subset that achieves a certain utility, as measured by a monotone submodular function. The data is
assumed to arrive in an arbitrary order and the goal is to minimize the number of passes over the
whole dataset while using a memory that is as small as possible.
The motivation behind studying SSC is that many real-world applications can be modeled as cover
problems, where we need to select a small subset of data points such that they maximize a particular
utility criterion. Often, the quality criterion can be captured by a utility function that satisfies
submodularity [27, 16, 15], an intuitive notion of diminishing returns.
Despite the fact that the standard Submodular Cover (SC) problem is extensively studied and very
well-understood, all the proposed algorithms in the literature heavily rely on having access to whole
ground set during their execution. However, in many real-world applications, this assumption does
not hold. For instance, when the dataset is being generated on the fly or is too large to fit in memory,
having access to the whole ground set may not be feasible. Similarly, depending on the application,
we may have some restrictions on how we can access the data. Namely, it could be that random
access to the data is simply not possible, or we might be restricted to only accessing a small fraction
of it. In all such scenarios, the optimization needs to be done on the fly.
The SC problem is first considered by Wolsey [28], who shows that a simple greedy algorithm yields
a logarithmic factor approximation. This algorithm performs well in practice and usually returns
∗ Theory of Computation Laboratory 2 (THL2), EPFL. These authors contributed equally to this work.
† Laboratory for Information and Inference Systems (LIONS), EPFL
solutions that are near-optimal. Moreover, improving on its theoretical approximation guarantee is
not possible under some natural complexity theoretic assumptions [12, 10]. However, such an offline
greedy approach is impractical for the SSC, since it requires an infeasible number of passes over the
stream.
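As a concrete reference point, here is a minimal sketch (our own illustration, not the paper's pseudocode) of Wolsey's offline greedy for SC: it repeatedly adds the element with the largest marginal gain until the target utility Q is reached, and every iteration scans the full ground set, which is exactly what a streaming algorithm cannot afford.

```python
def greedy_submodular_cover(ground_set, f, Q):
    # Offline greedy for SC: pick the element with the largest marginal gain
    # until f(S) >= Q. Each iteration needs random access to the whole
    # ground set, hence the infeasible number of passes in a streaming model.
    S = frozenset()
    while f(S) < Q:
        gains = {e: f(S | {e}) - f(S) for e in ground_set if e not in S}
        if not gains or max(gains.values()) <= 0:
            break  # Q is unreachable
        S = S | {max(gains, key=gains.get)}
    return S

# Toy example: a coverage utility (a special case of a monotone submodular f).
sets = {"a": {1, 2, 3}, "b": {3, 4}, "c": {5}}
f = lambda S: len(set().union(*(sets[e] for e in S))) if S else 0
print(greedy_submodular_cover(sets.keys(), f, Q=5))
```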
1.1 Our Contribution
In this work, we rigorously show that achieving any non-trivial approximation of SSC with a single pass over the data stream, while using a reasonable amount of memory, is not possible. More generally, we establish an unconditional lower bound on the trade-off between the memory and the approximation ratio of any p-pass streaming algorithm solving the SSC problem.

Hence, we consider instead a relaxed version of SSC, where we only seek to achieve a fraction $(1 - \epsilon)$ of the specified utility. We develop the first Efficient bicriteria Submodular Cover Streaming (ESC-Streaming) algorithm. ESC-Streaming is simple, easy to implement, and memory- as well as time-efficient. It returns solutions that are competitive with the near-optimal offline greedy algorithm. It requires only a single pass over the data in arbitrary order, and provides, for any $\epsilon > 0$, a $2/\epsilon$-approximation to the optimal solution, while achieving a $(1 - \epsilon)$ fraction of the specified utility.

In our experiments, we test the performance of ESC-Streaming on active set selection in materials science and graph cover problems. In the latter, we consider a graph dataset that consists of more than 787 million nodes and 47.6 billion edges.
1.2 Related work
Submodular optimization has attracted a lot of interest in machine learning, data mining, and
theoretical computer science. Faced with streaming and massive data, the traditional (offline) greedy
approaches fail. One popular approach to deal with the challenge of the data deluge is to adopt
streaming or distributed perspective. Several submodular optimization problems have been studied
so far under these two settings [25, 11, 9, 2, 20, 8, 18, 7, 14, 1]. In the streaming setting, the goal
is to find nearly optimal solutions, with a minimal number of passes over the data stream, memory
requirement, and computational cost (measured in terms of oracle queries).
A related streaming problem to SSC was investigated by Badanidiyuru et al. [2], where the authors
studied the streaming Submodular Maximization (SM) problem subject to a cardinality constraint. In
their setting, given a budget k, the goal is to pick at most k elements that achieve the largest possible
utility. Whereas for the SC problem, given a utility Q, the goal is to find the minimum number of
elements that can achieve it. In the offline setting of cardinality-constrained SM, the greedy algorithm returns a solution that is (1 − 1/e) away from the optimal value [21], which is known to be the best solution that one can obtain efficiently [22]. In the streaming setting, Badanidiyuru et al. [2] designed an elegant single-pass (1/2 − ε)-approximation algorithm that requires only O((k log k)/ε) memory.
More general constraints for SM have also been studied in the streaming setting, e.g., in [8].
Moreover, the Streaming Set Cover problem, which is a special case of the SSC problem, is extensively studied [25, 11, 9, 7, 14, 1]. In this special case, the elements in the data stream are m subsets of a universe X of size n, and the goal is to find the minimum number of sets k⋆ that can cover all the elements in the universe X. The study of the Streaming Set Cover problem is mainly focused on the semi-streaming model, where the memory is restricted to Õ(n)³. This regime was first investigated by Saha and Getoor [25], who designed an O(log n)-pass, O(log n)-approximation algorithm that uses Õ(n) space. Emek and Rosén [11] show that if one restricts the streaming algorithm to perform only one pass over the data stream, then the best possible approximation guarantee is O(√n). This lower
bound holds even for randomized algorithms. They also designed a deterministic greedy algorithm
that matches this approximation guarantee. By relaxing the single pass constraint, Chakrabarti and
Wirth [7] designed a p-pass semi-streaming (p + 1)·n^{1/(p+1)}-approximation algorithm, and proved that this is essentially tight up to a factor of (p + 1)³.
Partial streaming submodular optimization. The Streaming Set Cover has also been studied from
a bicriteria perspective, where one settles for solutions that only cover a (1 − ε)-fraction of the universe.
Building on the work of [11], the authors in [7] designed a p-pass semi-streaming algorithm that achieves a (1 − ε, ρ(n, ε))-approximation, where ρ(n, ε) = min{8p·ε^{−1/p}, (8p + 1)·n^{1/(p+1)}}. They also provided a lower bound that matches their approximation ratio up to a factor of Θ(p³).
³ The Õ notation is used to hide poly-log factors, i.e., Õ(n) := O(n · poly{log n, log m}).
Distributed submodular optimization. Mirzasoleiman et al. [20] consider the SC problem in the
distributed setting, where they design an efficient algorithm whose solution is close to that of the
offline greedy algorithm. Moreover, they study the trade-off between the communication cost and the
number of rounds to obtain such a solution.
To the best of our knowledge, no other works have studied the general SSC problem. We propose the
first efficient algorithm ESC-Streaming that approximately solves this problem with tight guarantees.
2 Problem Statement
Preliminaries. We assume that we are given a utility function f : 2^V → ℝ₊ that measures the quality of a given subset S ⊆ V, where V = {e₁, ⋯, e_m} is the ground set. The marginal gain associated with any given element e ∈ V with respect to some set S ⊆ V is defined as follows:
Δ_f(e|S) := f(S ∪ {e}) − f(S).
In this work, we focus on normalized, monotone, submodular utility functions f, where f is said to be:
1. submodular if for all S, T such that S ⊆ T, and for all e ∈ V \ T, Δ_f(e|S) ≥ Δ_f(e|T);
2. monotone if for all S, T such that S ⊆ T ⊆ V, we have f(S) ≤ f(T);
3. normalized if f(∅) = 0.
In the standard Submodular Cover (SC) problem, the goal is to find the smallest subset S ⊆ V that satisfies a certain utility Q, i.e.,
min_{S⊆V} |S|  s.t.  f(S) ≥ Q.   (SC)
Hardness results. The SC problem is known to be NP-Hard. A simple greedy strategy [28] that in each round selects the element with the highest marginal gain until Q is reached returns a solution of size at most H(max_e f({e}))·k⋆, where k⋆ is the size of the optimal solution set S⋆.⁴ Moreover, Feige [12] proved that this is the best possible approximation guarantee unless NP ⊆ DTIME(n^{O(log log n)}). This was recently improved to an NP-hardness result by Dinur and Steurer [10].
⁴ Here, H(x) is the x-th harmonic number and is bounded by H(x) ≤ 1 + ln x.
Streaming Submodular Cover (SSC). In the streaming setting, the main challenge is to solve the
SC problem while maintaining a small memory and without performing a large number of passes
over the data stream. We use m to denote the size of the data stream. Our first result states that any
single pass streaming algorithm with an approximation ratio better than m/2 must use at least Ω(m)
memory. Hence, for large datasets, if we restrict ourselves to a single pass streaming algorithm with
sublinear memory o(m), we cannot obtain any non-trivial approximation of the SSC problem (cf.,
Theorem 2 in Section 4). To obtain non-trivial and feasible guarantees, we need to relax the coverage
constraint in SC. Thus, we instead solve the Streaming Bicriteria Submodular Cover (SBSC) defined
as follows:
Definition 1. Given ? 2 (0, 1) and
1, an algorithm is said to be a (1 ?, )-bicriteria
approximation algorithm for the SBSC problem if for any Submodular Cover instance with utility Q
and optimal set size k ? , the algorithm returns a solution S such that
f (S)
(1
and |S| ? k ? .
?)Q
(1)
3 An efficient streaming submodular cover algorithm
ESC-Streaming algorithm. The first phase of our algorithm is described in Algorithm 1. The
algorithm receives as input a parameter M representing the size of the allowed memory. The
discussion of the role of this parameter is postponed to Section 4. The algorithm keeps t + 1 = log(M/2) + 1 representative sets. Each representative set S_j (j = 0, ..., t) has size at most 2^j and has a corresponding threshold value Q/2^j. Once a new element e arrives in the stream, it is added to all the representative sets that are not fully populated and for which the element's marginal gain is above the corresponding threshold, i.e., Δ(e|S_j) ≥ Q/2^j. This phase of the algorithm requires only one pass over the data stream. The running time of the first phase is O(log(M)) per element of the stream, since the per-element computational cost is O(log(M)) oracle calls.
In the second phase (i.e., Algorithm 2), given a feasible ε̂, the algorithm finds the smallest set S_i among the stored sets such that f(S_i) ≥ (1 − ε̂)·Q. For any query, the running time of the second phase is O(log log(M)). Note that after one pass over the stream, we have no limitation on the number of queries that we can answer, i.e., we do not need another pass over the stream. Moreover, this phase does not require any oracle calls, and its total memory usage is at most M.
Algorithm 1 ESC-Streaming Algorithm - Picking representative sets
t = log(M/2)
S_0 = S_1 = ... = S_t = ∅
for i = 1, ⋯, m do
    Let e be the next element in the stream
    for j = 0, ⋯, t do
        if Δ(e|S_j) ≥ Q/2^j and |S_j| < 2^j then
            S_j ← S_j ∪ {e}
        end if
    end for
end for
Algorithm 2 ESC-Streaming Algorithm - Responding to queries
Given a value ε̂, perform the following steps:
    Run a binary search on S_0, ..., S_t
    Return the smallest set S_i such that f(S_i) ≥ (1 − ε̂)·Q
    If no such set exists, return "Assumption Violated"
In the following section, we analyze ESC-Streaming and prove that it is a (1 − ε̂, 2/ε̂)-bicriteria approximation algorithm for SSC. Formally, we prove the following:
Theorem 1. For any given instance of the SSC problem, and any values M, ε̂ such that k⋆/ε̂ ≤ M, where k⋆ is the size of the optimal solution to SSC, the ESC-Streaming algorithm returns a (1 − ε̂, 2/ε̂)-approximate solution.
4 Theoretical Bounds
Lower Bound. We start by establishing a lower bound on the tradeoff between the memory
requirement and the approximation ratio of any p-pass streaming algorithm solving the SSC problem.
Theorem 2. For any number of passes p and any stream size m, a p-pass streaming algorithm that, with probability at least 2/3, approximates the submodular cover problem to a factor smaller than m^{1/p}/(p + 1), must use a memory of size at least Ω(m^{1/p}/(p·(p + 1)²)).
The proof of this theorem can be found in the supplementary material.
Note that for p = 1, Theorem 2 states that any one-pass streaming algorithm with an approximation
ratio better than m/2 requires at least Ω(m) memory. Hence, for large datasets, Theorem 2 rules out
any approximation of the streaming submodular cover problem, if we restrict ourselves to a one-pass
streaming algorithm with sublinear memory o(m). This result motivates the study of the Streaming
Bicriteria Submodular Cover (SBSC) problem as in Definition 1.
Main result and discussion. Refining the analysis of the greedy algorithm for SC [28], where we stop once we have achieved a utility of (1 − ε)·Q, yields that the number of elements that we pick is at most k⋆·ln(1/ε). This yields a tight (1 − ε, ln(1/ε))-bicriteria approximation algorithm for the Bicriteria Submodular Cover (BSC) problem. One can turn this bicriteria algorithm into a (1 − ε, ln(1/ε))-bicriteria algorithm for SBSC, at the cost of doing k⋆·ln(1/ε) passes over the data stream, which may be infeasible for some applications. Moreover, this requires m·k⋆·ln(1/ε) oracle calls, which may be infeasible for large datasets.
To circumvent these issues, it is natural to parametrize our algorithm by a user-defined memory budget M that the streaming algorithm is allowed to use. Assuming, for some 0 < ε ≤ e⁻¹, that the (1 − ε, ln(1/ε))-bicriteria solution given by the offline greedy algorithm fits in a memory of M/2 for the BSC variant of the problem, our algorithm (ESC-Streaming) is guaranteed to return a (1 − 1/ln(1/ε), 2 ln(1/ε))-bicriteria solution for the SBSC problem, while using at most M memory. Hence, in only one pass over the data stream, ESC-Streaming returns solutions guaranteed to cover, for small values of ε, almost the same fraction of the utility as the greedy solution, losing only a factor of two in the worst-case solution size. Moreover, the number of oracle calls needed by ESC-Streaming is only m log M, which for M = 2k⋆·ln(1/ε) is bounded by
m log M = m log(2k⋆·ln(1/ε))   ≤   m·k⋆·ln(1/ε),
where the left-hand side counts the oracle calls made by ESC-Streaming and the right-hand side those made by the greedy algorithm,
which is more than a factor k⋆/log(k⋆) smaller. This enables the ESC-Streaming algorithm to run much faster than the offline greedy algorithm. Another feature of ESC-Streaming is that it performs a single pass over the data stream, and after this unique pass, we are able to query a (1 − 1/ln(1/ε′), 2 ln(1/ε′))-bicriteria solution for any ε ≤ ε′ ≤ e⁻¹, without any additional oracle calls. Whenever the above inequality does not hold, ESC-Streaming returns "Assumption Violated". More precisely, we state the following theorem, whose proof can be found in the supplementary material.
Theorem 3. For any given instance of the SSC problem, and any values M, ε such that 2k⋆·ln(1/ε) ≤ M, where k⋆ is the optimal solution size, the ESC-Streaming algorithm returns a (1 − 1/(ln 1/ε), 2 ln(1/ε))-approximate solution.
Remarks. Note that in Algorithm 1, we can replace the constant 2 by another choice of constant 1 < α ≤ 2. The representative sets are changed accordingly to sizes α^j, and t = log_α(M/α). Varying α provides a trade-off between memory and the solution-size guarantee. More precisely, for any 1 < α ≤ 2, ESC-Streaming achieves a (1 − 1/(ln 1/ε), α·ln(1/ε))-approximation guarantee for instances of SSC where α·k⋆·ln(1/ε) ≤ M. However, the improvement in the size approximation guarantee comes at the cost of an increased memory usage M/(α − 1), and an increased number of oracle calls m(log_α(M/α) + 1).
Notice that in the statement of Theorem 3, the approximation guarantee of ESC-Streaming is
given with respect to a memory only large enough to fit the offline greedy algorithm's solution. However, if we allow our memory M to be as large as k⋆/ε̂, then Theorem 1 follows immediately for ε̂ = 1/ln(1/ε).
5 Example Applications
Many real-world problems, such as data summarization [27], image segmentation [16], influence
maximization in social networks [15], can be formulated as a submodular cover problem and can
benefit from the streaming setting. In this section, we discuss two such concrete applications.
5.1 Active set selection
To scale kernel methods (such as kernel ridge regression, Gaussian processes, etc.) to large data sets,
we often rely on active set selection methods [23]. For example, a significant problem with Gaussian
process prediction is that it scales as O(n³). Storing the kernel matrix K and solving the associated linear system is prohibitive when n is large. One way to overcome this is to select a small subset of data while maintaining a certain diversity. A popular approach for active set selection is the Informative Vector Machine (IVM) [26], where the goal is to select a set S that maximizes the utility function
f(S) = (1/2)·log det(I + σ⁻²·K_{S,S}).   (2)
Here, K_{S,S} is the submatrix of K corresponding to rows/columns indexed by S, and σ > 0 is a regularization parameter. This utility function is monotone submodular, as shown in [17].
5.2 Graph set cover
In a lot of applications, e.g., influence maximization in social networks [15], community detection in graphs [13], etc., we are interested in selecting a small subset of vertices from a massive graph that "cover" in some sense a large fraction of the graph.
In particular, in Section 6, we consider two fundamental set cover problems: the Dominating Set and Vertex Cover problems. Given a graph G(V, E) with vertex set V and edge set E, let Γ(S) denote the neighbours of the vertices in S in the graph, and δ(S) the edges in the graph connected to a vertex in S. Dominating Set is the problem of selecting the smallest set that covers the vertex set V, i.e., the corresponding utility is f(S) = |Γ(S) ∪ S|. Vertex Cover is the problem of selecting the smallest set that covers the edge set E, i.e., the corresponding utility is f(S) = |δ(S)|. Both utilities are monotone submodular functions.
6 Experimental Results
We address the following questions in our experiments:
1. How does ESC-Streaming perform in comparison to the offline greedy algorithm, in terms
of solution size and speed?
2. How does α influence the trade-off between solution size and speed?
3. How does ESC-Streaming scale to massive data sets?
We evaluate the performance of ESC-Streaming on real-world data sets with two applications: active
set selection and graph set cover problems, described in Section 5. For active set selection, we choose
a dataset having a size that permits the comparison with the offline greedy algorithm. For graph cover,
we run ESC-Streaming on a large graph of 787 million nodes and 47.6 billion edges.
We measure the computational cost in terms of the number of oracle calls, which is independent of
the concrete implementation and platform.
6.1
Active Set Selection for Quantum Mechanics
In quantum chemistry, computing certain properties, such as atomization energy of molecules, can be
computationally challenging [24]. In this setting, it is of interest to choose a small and diverse training
set, from which one can predict the atomization energy (e.g., by using kernel ridge regression) of
other molecules.
In this setting, we apply ESC-Streaming on the log-det function defined in Section 5.1, where we use the Gaussian kernel K_ij = exp(−‖x_i − x_j‖₂²/(2h²)), and we set the hyperparameters as in [24]: σ = 1, h = 724. The dataset consists of 7k small organic molecules, each represented by a 276-dimensional vector. We set M = 2¹⁵ and vary Q from f(V)/2 to 3f(V)/4, and α from 1.1 to 2.
We compare against offline greedy, and its accelerated version with lazy updates (Lazy Greedy) [19]. For all algorithms, we provide a vector of different values of ε̂ as input, and terminate once the utility (1 − ε̂)·Q corresponding to the smallest ε̂ is achieved. Below we report the performance for the smallest and largest tested values, ε̂ = 0.01 and ε̂ = 0.5, respectively.
In Figure 6.1, we show the performance of ESC-Streaming with respect to offline greedy and lazy greedy, in terms of the size of the solutions picked and the number of oracle calls made. The computational costs of all algorithms are normalized to those of offline greedy.
It can be seen that standard ESC-Streaming, with α = 2, always chooses a set at most twice (the largest observed ratio is 2.1089) as large as offline greedy, using at most 3.15% and 25.5% of the number of oracle calls made, respectively, by offline greedy and lazy greedy. As expected, varying the parameter α leads to smaller solutions at the cost of more oracle calls: α = 1.1 leads to solutions of roughly the same size as the solutions found by offline greedy. Note also that choosing larger values of α leads to jumps in the solution set sizes (c.f. Figure 6.1). In particular, varying the required utility Q, even by a
[Figure 6.1 here: three panels plotting, against Q/f(V), the percentage of oracle calls relative to offline greedy and the sizes of the selected sets, for ESC-Streaming with α ∈ {1.1, 1.25, 1.5, 1.75, 2}, Lazy Greedy, and Offline Greedy.]
Figure 6.1: Active set selection of molecules: (Left) Percentage of oracle calls made relative to offline greedy; (Middle) Size of selected sets for ε̂ = 0.01; (Right) Size of selected sets for ε̂ = 0.5.
small amount, may not be achievable by the current solution's size (α^j) and would require moving to a set larger by at least a factor α (α^{j+1}).
Finally, we remark that even for this small dataset, offline greedy, for the largest tested Q, required 1.2 × 10⁷ oracle calls and took almost 2 days to run on the same machine.
6.2 Cover problems on massive graphs
To assess the scalability of ESC-Streaming, we apply it to the "uk-2014" graph, a large snapshot of the
.uk domain taken at the end of 2014 [5, 4, 3]. It consists of 787,801,471 nodes and 47,614,527,250
edges. This graph is sparse, with average degree 60.440, and hence requires large cover solutions.
Storing this dataset (i.e., the adjacency list of the graph) on the hard-drive requires more than 190GB
of memory.
We solve both the Dominating Set and Vertex Cover problems, whose utility functions are defined in
Section 5. For the Dominating Set problem, we set M = 520 MB, α = 2 and Q = 0.7|V|. We run the first phase of ESC-Streaming (c.f. Algorithm 1), then query for different values of ε̂ between 0 and 1, using Algorithm 2. Similarly, for the Vertex Cover problem, we set M = 320 MB, α = 2 and Q = 0.8|E|. Figure 6.2 shows the performance of ESC-Streaming on both the dominating set and vertex cover problems, in terms of utility achieved, i.e., the number of vertices/edges covered, for all feasible ε̂ values, with respect to the size of the subset of vertices picked.
As a baseline, we compare against a random selection procedure that picks a random permutation of the vertices and then selects any vertex with a non-zero marginal gain, until it reaches the same partial
cover achieved by ESC-Streaming. Note that the offline greedy, even with lazy evaluations, is not
applicable here since it does not terminate in a reasonable time, so we omit it from the comparison.
Similarly, we do not compare against Emek and Rosén's algorithm [11], due to its large memory requirement of n log m, which in this case is roughly 20 times bigger than the memory used by ESC-Streaming.
We do significantly better than a random selection, especially on the Vertex Cover problem, which
for sparse graphs is more challenging than the Dominating Set problem.
Since running the greedy algorithm on the "uk-2014" graph goes beyond our computing infrastructure, we include another instance of the Dominating Set cover problem on a smaller graph, "Friendster", an online gaming network [29], to compare with the offline greedy algorithm. This graph has 65.6 million nodes and 1.8 billion edges. The memory required by ESC-Streaming is less than 30MB for α = 2. We let offline greedy run for 2 days, and gathered data for 2000 greedy iterations. Figure 6.2 (Right) shows that our performance almost matches the greedy solutions we managed to compute.
[Figure 6.2: (Left) Vertex cover on "uk-2014"; (Middle) Dominating set on "uk-2014"; (Right) Dominating set on "Friendster". Each panel plots the fraction of vertices (resp. edges) covered against the fraction of vertices selected, for ESC-Streaming (α = 2), Random and, in the right panel, Offline Greedy.]
7 Conclusion
In this paper, we consider the SC problem in the streaming setting, where we select the least number of elements that can achieve a certain utility, measured by a submodular function. We prove that there cannot exist any single pass streaming algorithm that can achieve a non-trivial approximation of SSC using sublinear memory, if the utility has to be met exactly. Consequently, we develop an
efficient approximation algorithm, ESC-Streaming, which finds solution sets, slightly larger than the
optimal solution, that partially cover the desired utility. We rigorously analyzed the approximation
guarantees of ESC-Streaming, and compared these guarantees against the offline greedy algorithm.
We demonstrate the performance of ESC-Streaming on real-world problems. We believe that our
algorithm is an important step towards solving streaming and large scale submodular cover problems,
which lie at the heart of many modern machine learning applications.
Acknowledgments
We would like to thank Michael Kapralov and Ola Svensson for useful discussions. This work was
supported in part by the European Commission under ERC Future Proof, SNF 200021-146750, SNF
CRSII2-147633, NCCR Marvel, and ERC Starting Grant 335288-OptApprox.
References
[1] Sepehr Assadi, Sanjeev Khanna, and Yang Li. Tight bounds for single-pass streaming complexity of the set cover problem. arXiv preprint arXiv:1603.05715, 2016.
[2] Ashwinkumar Badanidiyuru, Baharan Mirzasoleiman, Amin Karbasi, and Andreas Krause. Streaming submodular maximization: Massive data summarization on the fly. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 671–680. ACM, 2014.
[3] Paolo Boldi, Andrea Marino, Massimo Santini, and Sebastiano Vigna. BUbiNG: Massive crawling for the masses. In Proceedings of the Companion Publication of the 23rd International Conference on World Wide Web, pages 227–228. International World Wide Web Conferences Steering Committee, 2014.
[4] Paolo Boldi, Marco Rosa, Massimo Santini, and Sebastiano Vigna. Layered label propagation: A multiresolution coordinate-free ordering for compressing social networks. In Sadagopan Srinivasan, Krithi Ramamritham, Arun Kumar, M. P. Ravindra, Elisa Bertino, and Ravi Kumar, editors, Proceedings of the 20th International Conference on World Wide Web, pages 587–596. ACM Press, 2011.
[5] Paolo Boldi and Sebastiano Vigna. The WebGraph framework I: Compression techniques. In Proc. of the Thirteenth International World Wide Web Conference (WWW 2004), pages 595–601, Manhattan, USA, 2004. ACM Press.
[6] Amit Chakrabarti, Graham Cormode, and Andrew McGregor. Robust lower bounds for communication and stream computation. In Proceedings of the Fortieth Annual ACM Symposium on Theory of Computing, pages 641–650. ACM, 2008.
[7] Amit Chakrabarti and Tony Wirth. Incidence geometries and the pass complexity of semi-streaming set cover. arXiv preprint arXiv:1507.04645, 2015.
[8] Chandra Chekuri, Shalmoli Gupta, and Kent Quanrud. Streaming algorithms for submodular function maximization. In Automata, Languages, and Programming, pages 318–330. Springer, 2015.
[9] Erik D Demaine, Piotr Indyk, Sepideh Mahabadi, and Ali Vakilian. On streaming and communication complexity of the set cover problem. In Distributed Computing, pages 484–498. Springer, 2014.
[10] Irit Dinur and David Steurer. Analytical approach to parallel repetition. In Proceedings of the 46th Annual ACM Symposium on Theory of Computing, STOC '14, pages 624–633, New York, NY, USA, 2014. ACM.
[11] Yuval Emek and Adi Rosén. Semi-streaming set cover. In Automata, Languages, and Programming, pages 453–464. Springer, 2014.
[12] Uriel Feige. A threshold of ln n for approximating set cover. Journal of the ACM (JACM), 45(4):634–652, 1998.
[13] Santo Fortunato. Community detection in graphs. Physics Reports, 486(3):75–174, 2010.
[14] Piotr Indyk, Sepideh Mahabadi, and Ali Vakilian. Towards tight bounds for the streaming set cover problem. arXiv preprint arXiv:1509.00118, 2015.
[15] David Kempe, Jon Kleinberg, and Éva Tardos. Maximizing the spread of influence through a social network. In Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 137–146. ACM, 2003.
[16] Gunhee Kim, Eric P Xing, Li Fei-Fei, and Takeo Kanade. Distributed cosegmentation via submodular optimization on anisotropic diffusion. In Computer Vision (ICCV), 2011 IEEE International Conference on, pages 169–176. IEEE, 2011.
[17] Andreas Krause and Daniel Golovin. Submodular function maximization. Tractability: Practical Approaches to Hard Problems, 3:19, 2012.
[18] Ravi Kumar, Benjamin Moseley, Sergei Vassilvitskii, and Andrea Vattani. Fast greedy algorithms in mapreduce and streaming. ACM Transactions on Parallel Computing, 2(3):14, 2015.
[19] Michel Minoux. Accelerated greedy algorithms for maximizing submodular set functions. In Optimization Techniques, pages 234–243. Springer, 1978.
[20] Baharan Mirzasoleiman, Amin Karbasi, Ashwinkumar Badanidiyuru, and Andreas Krause. Distributed submodular cover: Succinctly summarizing massive data. In Advances in Neural Information Processing Systems, pages 2863–2871, 2015.
[21] George L Nemhauser, Laurence A Wolsey, and Marshall L Fisher. An analysis of approximations for maximizing submodular set functions – I. Mathematical Programming, 14(1):265–294, 1978.
[22] George L Nemhauser and Leonard A Wolsey. Best algorithms for approximating the maximum of a submodular set function. Mathematics of Operations Research, 3(3):177–188, 1978.
[23] Carl Edward Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine Learning (Adaptive Computation and Machine Learning). The MIT Press, 2005.
[24] Matthias Rupp. Machine learning for quantum mechanics in a nutshell. International Journal of Quantum Chemistry, 115(16):1058–1073, 2015.
[25] Barna Saha and Lise Getoor. On maximum coverage in the streaming model & application to multi-topic blog-watch. In SDM, volume 9, pages 697–708. SIAM, 2009.
[26] Matthias Seeger. Greedy forward selection in the informative vector machine. Technical report, University of California at Berkeley, 2004.
[27] Sebastian Tschiatschek, Rishabh K Iyer, Haochen Wei, and Jeff A Bilmes. Learning mixtures of submodular functions for image collection summarization. In Advances in Neural Information Processing Systems, pages 1413–1421, 2014.
[28] Laurence A Wolsey. An analysis of the greedy algorithm for the submodular set covering problem. Combinatorica, 2(4):385–393, 1982.
[29] Jaewon Yang and Jure Leskovec. Defining and evaluating network communities based on ground-truth. In Proceedings of the ACM SIGKDD Workshop on Mining Data Semantics, MDS '12, pages 3:1–3:8, New York, NY, USA, 2012. ACM.
5,720 | 6,176 | A scaled Bregman theorem with applications
Aditya Krishna Menon?,?
Cheng Soon Ong?,?
?
?
?
Data61, the Australian National University and the University of Sydney
{richard.nock, aditya.menon, chengsoon.ong}@data61.csiro.au
Abstract
Bregman divergences play a central role in the design and analysis of a range of
machine learning algorithms through a handful of popular theorems. We present
a new theorem which shows that "Bregman distortions" (employing a potentially
non-convex generator) may be exactly re-written as a scaled Bregman divergence
computed over transformed data. This property can be viewed from the standpoints
of geometry (a scaled isometry with adaptive metrics) or convex optimization (relating generalized perspective transforms). Admissible distortions include geodesic
distances on curved manifolds and projections or gauge-normalisation.
Our theorem allows one to leverage to the wealth and convenience of Bregman
divergences when analysing algorithms relying on the aforementioned Bregman
distortions. We illustrate this with three novel applications of our theorem: a
reduction from multi-class density ratio to class-probability estimation, a new
adaptive projection free yet norm-enforcing dual norm mirror descent algorithm,
and a reduction from clustering on flat manifolds to clustering on curved manifolds.
Experiments on each of these domains validate the analyses and suggest that the
scaled Bregman theorem might be a worthy addition to the popular handful of
Bregman divergence properties that have been pervasive in machine learning.
1 Introduction: Bregman divergences as a reduction tool
Bregman divergences play a central role in the design and analysis of a range of machine learning
(ML) algorithms. In recent years, Bregman divergences have arisen in procedures for convex
optimisation [4], online learning [9, Chapter 11], clustering [3], matrix approximation [13], class-probability
[18], and computational geometry [5]. Despite these being very different applications, many of
these algorithms and their analyses basically rely on three beautiful analytic properties of Bregman
divergences, properties that we summarize for differentiable scalar convex functions φ with derivative φ′, conjugate φ⋆, and divergence D_φ:
• the triangle equality: D_φ(x‖y) + D_φ(y‖z) − D_φ(x‖z) = (φ′(z) − φ′(y))·(x − y);
• the dual symmetry property: D_φ(x‖y) = D_{φ⋆}(φ′(y)‖φ′(x));
• the right-centroid (population minimizer) is the average: argmin_μ E[D_φ(X‖μ)] = E[X].
Casting a problem as a Bregman minimisation allows one to employ these properties to simplify
analysis; for example, by interpreting mirror descent as applying a particular Bregman regulariser,
Beck and Teboulle [4] relied on the triangle equality above to simplify their proof of convergence.
Another intriguing possibility is that one may derive reductions amongst learning problems by
connecting their underlying Bregman minimisations. Menon and Ong [24] recently established how
(binary) density ratio estimation (DRE) can be exactly reduced to class-probability estimation (CPE).
This was facilitated by interpreting CPE as a Bregman minimisation [7, Section 19], and a new
property of Bregman divergences: Menon and Ong [24, Lemma 2] showed that for any twice differentiable scalar convex φ, for g(x) = 1 + x and φ̃(x) := g(x)·φ(x/g(x)),
g(x)·D_φ(x/g(x) ‖ y/g(y)) = D_φ̃(x‖y), ∀x, y.   (1)

Problem A                              Problem B that Theorem 1 reduces A to      Reference
Multiclass density-ratio estimation    Multiclass class-probability estimation    §3, Lemma 2
Online optimisation on Lq ball         Convex unconstrained online learning       §4, Lemma 4
Clustering on curved manifolds         Clustering on flat manifolds               §5, Lemma 5
Table 1: Applications of our scaled Bregman Theorem (Theorem 1). "Reduction" encompasses shortcuts on algorithms and on analyses (algorithm/proof A uses algorithm/proof B as subroutine).
Since the binary class-probability function η(x) = Pr(Y = 1|X = x) is related to the class-conditional density ratio r(x) = Pr(X = x|Y = 1)/Pr(X = x|Y = −1) via Bayes' rule as η(x) = r(x)/g(r(x)) ([24] assume Pr(Y = 1) = 1/2), any η̂ with small D_φ(η‖η̂) implicitly produces an r̂ with low D_φ̃(r‖r̂), i.e. a good estimate of the density ratio. The Bregman property of eq. (1) thus establishes a reduction from DRE to CPE. Two questions arise from this analysis: can we generalise eq. (1) to other g(·), and if so, can we similarly relate other problems to each other?
This paper presents a new Bregman identity (Theorem 1), the scaled Bregman theorem, a significant generalisation of Menon and Ong [24, Lemma 2]. It shows that general distortions D_φ̃ (which are not necessarily convex, positive, bounded or symmetric) may be re-expressed as a Bregman divergence D_φ computed over transformed data, and thus inherit their good properties despite appearing prima facie to be a very different object. This transformation can be as simple as a projection or normalisation by a gauge, or more involved like the exponential map on lifted coordinates for a curved manifold.
Our theorem can be summarized in two ways. The first is geometric as it specializes to a scaled
isometry involving adaptive metrics. The second calls to a fundamental object of convex analysis,
generalized perspective transforms [11, 22, 23]. Indeed, our theorem states when
"the perspective of a Bregman divergence equals the distortion of a perspective",
for a perspective (?? in eq. 1) which is analytically a generalized perspective transform but does
not rely on the same convexity and sign requirements as in Mar?chal [22, 23]. We note that the
perspective of a Bregman divergence (the left-hand side of eq. 1) is a special case of conformal
divergence [27], yet to our knowledge it has never been formally defined. As with the aforementioned
key properties of Bregman divergences, Theorem 1 has potentially wide implications for ML. We
give three such novel applications to vastly different problems (see Table 1):
• a reduction of multiple density ratio estimation to multiclass-probability estimation (§3), generalising the results of [24] for the binary label case,
• a projection-free yet norm-enforcing mirror gradient algorithm (enforced norms are those of mirrored vectors and of the offset) with guarantees for adaptive filtering (§4), and
• a seeding approach for clustering on positively or negatively (constant) curved manifolds, based on a popular seeding for flat manifolds and with the same approximation guarantees (§5).
Experiments on each of these domains (§6) validate our analysis. The Supplementary Material (SM)
details the proofs of all results, provides the experimental results in extenso and some additional
(nascent) applications to exponential families and computational information geometry.
2 Main result: the scaled Bregman theorem
In the remainder, [k] := {0, 1, ..., k} and [k]⋆ := {1, 2, ..., k} for k ∈ ℕ. For any differentiable (but not necessarily convex) φ : X → ℝ, we define the Bregman distortion D_φ as
D_φ(x‖y) := φ(x) − φ(y) − (x − y)ᵀ∇φ(y).   (2)
If ? is convex, D? is the familiar Bregman divergence with generator ?. Without further ado, we
present our main result.
Theorem 1 Let φ : X → ℝ be convex differentiable, and g : X → ℝ⋆ be differentiable. Then,
g(x)·D_φ((1/g(x))·x ‖ (1/g(y))·y) = D_φ̃(x‖y), ∀x, y ∈ X,   (3)
where φ̃(x) := g(x)·φ((1/g(x))·x),   (4)
X | D_φ(x‖y) | D_φ̃(x‖y) | g(x)
ℝ^d | (1/2)·‖x − y‖₂² | ‖x‖₂·(1 − cos θ(x, y)) | ‖x‖₂
ℝ^d | (1/2)·(‖x‖_q² − ‖y‖_q²) − Σᵢ (xᵢ − yᵢ)·sign(yᵢ)·|yᵢ|^{q−1}/‖y‖_q^{q−2} | W·‖x‖_q − W·Σᵢ xᵢ·sign(yᵢ)·|yᵢ|^{q−1}/‖y‖_q^{q−1} | ‖x‖_q/W
ℝ^d × ℝ | (1/2)·‖x^S − y^S‖₂² | (‖x‖₂/sin ‖x‖₂)·(1 − cos D_G(x, y)) | ‖x‖₂/sin ‖x‖₂
ℝ^d × ℂ | (1/2)·‖x^H − y^H‖₂² | −(‖x‖₂/sinh ‖x‖₂)·(cosh D_G(x, y) − 1) | −‖x‖₂/sinh ‖x‖₂
ℝ₊^d | Σᵢ xᵢ log(xᵢ/yᵢ) − 1ᵀ(x − y) | Σᵢ xᵢ log(xᵢ/yᵢ) − d·E[X]·log(E[X]/E[Y]) | 1ᵀx
ℝ₊^d | Σᵢ xᵢ/yᵢ − Σᵢ log(xᵢ/yᵢ) − d | Σᵢ (xᵢ/yᵢ)·(Πⱼ yⱼ)^{1/d} − d·(Πⱼ xⱼ)^{1/d} | Πᵢ xᵢ^{1/d}
S(d) | tr(X log X − X log Y) − tr(X) + tr(Y) | tr(X log X − X log Y) − tr(X)·log(tr(X)/tr(Y)) | tr(X)
S(d) | tr(XY⁻¹) − log det(XY⁻¹) − d | det(Y^{1/d})·tr(XY⁻¹) − d·det(X^{1/d}) | det(X^{1/d})
Table 2: Examples of (D_φ, D_φ̃, g) for which eq. (3) holds. Functions x^S = f(x) : ℝ^d → ℝ^{d+1} and x^H = f(x) : ℝ^d → ℝ^d × ℂ are the Sphere and Hyperbolic lifting maps defined in SM, eqs. 51, 62. W > 0 is a constant. D_G denotes the geodesic distance on the sphere (for x^S) or the hyperboloid (for x^H). S(d) is the set of symmetric real matrices. Related proofs are in SM, Section III.
.
if and only if (i) g is affine on X, or (ii) for every z ? Xg = {(1/g(x)) ? x : x ? X},
? (z) = z > ??(z) .
(5)
Table 2 presents some examples of (sometimes involved) triplets (D_φ, D_φ̃, g) for which eq. (3) holds; related proofs are in Appendix III. Depending on φ and g, there are at least two ways to summarize Theorem 1. One is geometric: Theorem 1 sometimes states a scaled isometry between X and X_g. The other one comes from convex optimisation: Theorem 1 defines generalized perspective transforms on Bregman divergences and roughly states the identity between the perspective transform of a Bregman divergence and the Bregman distortion of the perspective transform. Appendix VIII gives more details for both properties. We refer to Theorem 1 as the scaled Bregman theorem.
Remark. If X_g is a vector space, φ satisfies eq. (5) if and only if it is positive homogeneous of degree 1 on X_g (i.e. φ(λz) = λ·φ(z) for any λ > 0), by Euler's homogeneous function theorem. When X_g is not a vector space, this only holds for λ such that λz ∈ X_g as well. We thus call the gradient condition of eq. (5) "restricted positive homogeneity" for simplicity.
Remark. Appendix IV gives a "deep composition" extension of Theorem 1.
For the special case where X = ℝ and g(x) = 1 + x, Theorem 1 is exactly [24, Lemma 2] (c.f. eq. 1). We wish to highlight a few points with regard to our more general result. First, the "distortion" generator φ̃ may be¹ non-convex, as the following illustrates.
Example. Suppose φ(x) = (1/2)·‖x‖₂², the generator for the squared Euclidean distance. Then, for g(x) = 1 + 1ᵀx, we have φ̃(x) = (1/2)·‖x‖₂²/(1 + 1ᵀx), which is non-convex on X = ℝ^d.
When φ̃ is non-convex, the right-hand side in eq. (3) is an object that ostensibly bears only a superficial similarity to a Bregman divergence; it is somewhat remarkable that Theorem 1 shows this general "distortion" between a pair (x, y) to be entirely equivalent to a (scaling of a) Bregman divergence between some transformation of the points. Second, when g is linear, eq. (3) holds for any convex φ (this was the case considered in [24]). When g is non-linear, however, φ must be chosen carefully so that (φ, g) satisfies the restricted homogeneity condition² of eq. (5). In general, given a convex φ, one can "reverse engineer" a suitable g, as illustrated by the following example.
Example. Suppose³ φ(x) = (1 + ‖x‖₂²)/2. Then, eq. (5) requires that ‖x‖₂² = 1 for every x ∈ X_g, i.e. X_g is (a subset of) the unit sphere. This is afforded by the choice g(x) = ‖x‖₂.
Third, Theorem 1 is not merely a mathematical curiosity: we now show that it facilitates novel
results in three very different domains, namely estimating multiclass density ratios, constrained
online optimisation, and clustering data on a manifold with non-zero curvature. We discuss nascent
applications to exponential families and computational geometry in Appendices V and VI.
¹ Evidently, φ̃ is convex iff g is non-negative, by eq. (3) and the fact that a function is convex iff its Bregman "distortion" is nonnegative [6, Section 3.1.3].
² We stress that this condition only needs to hold on X_g ⊆ X; it would not be really interesting in general for φ to be homogeneous everywhere in its domain, since we would basically have φ̃ = φ.
³ The constant 1/2 added in φ does not change D_φ, since a Bregman divergence is invariant to affine terms; removing it however would make the divergences D_φ and D_φ̃ differ by a constant.
3 Multiclass density-ratio estimation via class-probability estimation
Given samples from a number of densities, density ratio estimation concerns estimating the ratio
between each density and some reference density. This has applications in the covariate shift problem
wherein the train and test distributions over instances differ [33]. Our first application of Theorem 1
is to show how density ratio estimation can be reduced to class-probability estimation [7, 29].
To proceed, we fix notation. For some integer C ≥ 1, consider a distribution P(X, Y) over an (instance, label) space X × [C]. Let ({P_c}_{c=1}^C, π) be densities giving P(X|Y = c) and P(Y = c) respectively, and M giving P(X) accordingly. Fix c⋆ ∈ [C] a reference class, and suppose for simplicity that c⋆ = C. Let π̄ ∈ Δ^{C−1} be such that π̄_c = π_c/(1 − π_C). Density ratio estimation [35] concerns inferring the vector r(x) ∈ ℝ^{C−1} of density ratios relative to C, with r_c(x) = P(X = x|Y = c)/P(X = x|Y = C), while class-probability estimation [7] concerns inferring the vector η(x) ∈ ℝ^{C−1} of class-probabilities, with η_c(x) = P(Y = c|X = x)/π̄_c. In both cases, we estimate the respective quantities given an iid sample S ∼ P(X, Y)^m (m is the training sample size).
The genesis of the reduction from density ratio to class-probability estimation is the fact that r(x) = (π_C/(1 − π_C))·η(x)/η_C(x). In practice one will only have an estimate η̂, typically derived by minimising a suitable loss on the given S [37], with a canonical example being multiclass logistic regression. Given η̂, it is natural to estimate the density ratio via:
r̂(x) = η̂(x)/η̂_C(x).   (6)
While this estimate is intuitive, to establish a formal reduction we must relate the quality of r̂ to that of η̂. Since the minimisation of a suitable loss for class-probability estimation is equivalent to a Bregman minimisation [7, Section 19], [37, Proposition 7], this is however immediate by Theorem 1:
Lemma 2 Given a class-probability estimator η̂ : X → [0, 1]^{C−1}, let the density ratio estimator r̂ be as per Equation 6. Then for any convex differentiable φ : [0, 1]^{C−1} → ℝ,
E_{X∼M}[D_φ(η(X)‖η̂(X))] = (1 − π_C)·E_{X∼P_C}[D_φ̃(r(X)‖r̂(X))]   (7)
where φ̃ is as per Equation 4 with g(x) = π_C/(1 − π_C) + π̄ᵀx.
Lemma 2 generalises [24, Proposition 3], which focussed on the binary case with π = 1/2 (see Appendix VII for a review of that result). Unpacking the lemma, the LHS in Equation 7 represents the object minimised by some suitable loss for class-probability estimation. Since g is affine, we can use any convex, differentiable φ, and so can use any suitable class-probability loss to estimate η̂. Lemma 2 thus implies that producing η̂ by minimising any class-probability loss equivalently produces an r̂ as per Equation 6 that minimises a Bregman divergence to the true r. Thus, Theorem 1 provides a reduction from density ratio to multiclass probability estimation.
We now detail two applications where g(·) is no longer affine, and φ must be chosen more carefully.
4 Dual norm mirror descent: projection-free online learning on Lp balls
A substantial amount of work in the intersection of ML and convex optimisation has focused on
constrained optimisation within a ball [32, 14]. This optimisation is typically via projection operators
that can be expensive to compute [17, 19]. We now show that gauge functions can be used as an
inexpensive alternative, and that Theorem 1 easily yields guarantees for this procedure in online
learning. We consider the adaptive filtering problem, closely related to the online least squares
problem with linear predictors [9, Chapter 11]. Here, over a sequence of T rounds, we observe some
x_t ∈ X. We must then predict a target value ŷ_t = w_{t−1}ᵀx_t using our current weight vector w_{t−1}. The true target y_t = uᵀx_t + ε_t is then revealed, where ε_t is some unknown noise, and we may update our weight to w_t. Our goal is to minimise the regret of the sequence {w_t}_{t=0}^T,
R(w_{1:T}|u) := Σ_{t=1}^T (uᵀx_t − w_{t−1}ᵀx_t)² − Σ_{t=1}^T (uᵀx_t − y_t)².   (8)
Let q ∈ (1, 2] and p be such that 1/p + 1/q = 1. For φ := (1/2)·‖x‖_q² and loss ℓ_t(w) := (1/2)·(y_t − wᵀx_t)², the p-LMS algorithm [20] employs the stochastic mirror gradient updates:
w_t := argmin_w η_t·ℓ_t(w) + D_φ(w‖w_{t−1}) = (∇φ)⁻¹(∇φ(w_{t−1}) − η_t·∇ℓ_t),   (9)
where η_t is a learning rate to be specified by the user. [20, Theorem 2] shows that for appropriate η_t, one has R(w_{1:T}|u) ≤ (p − 1)·max_{x∈X} ‖x‖_p²·‖u‖_q².
The p-LMS updates do not provide any explicit control on ‖w_t‖, i.e. there is no regularisation. Experiments (Section 6) suggest that leaving ‖w_t‖ uncontrolled may not be a good idea, as the increase of the norm sometimes prevents (significant) updates (eq. (9)). Also, the wide success of regularisation in ML calls for regularised variants that retain the regret guarantees and computational efficiency of p-LMS. (Adding a projection step to eq. (9) would not achieve both.) We now do just this. For fixed W > 0, let φ := (1/2)·(W² + ‖x‖_q²), a translation of that used in p-LMS. Invoking Theorem 1 with the admissible g_q(x) = ‖x‖_q/W yields φ̃ = φ̃_q = W·‖x‖_q (see Table 2). Using the fact that the Lp and Lq norms are dual of each other, we replace eq. (9) by:
w_t := ∇φ̃_p(∇φ̃_q(w_{t−1}) − η_t·∇ℓ_t).   (10)
See Lemma A of the Appendix for the simple forms of ∇φ̃_{p,q}. We call update (10) the dual norm p-LMS (DN-p-LMS) algorithm, noting that the dual refers to the polar transform of the norm, and g stems from a gauge normalisation for B_q(W), the closed Lq ball with radius W > 0. Namely, we have γ_GAU(x) = W/‖x‖_q = g(x)⁻¹ for the gauge γ_GAU(x) := sup{z ≥ 0 : z·x ∈ B_q(W)}, so that ∇φ̃_q implicitly performs gauge normalisation of the data. This update is no more computationally expensive than eq. (9) (we simply need to compute the p- and q-norms of appropriate terms) but, crucially, it automatically constrains the norms of w_t and its image by ∇φ̃_q.
Lemma 3 For the update in eq. (10), ‖w_t‖_q = ‖∇φ̃_q(w_t)‖_p = W, ∀t > 0.
Lemma 3 is remarkable, since nowhere in eq. (10) do we project onto the Lq ball. Nonetheless, for the DN-p-LMS updates to be principled, we need a similar regret guarantee to the original p-LMS. Fortunately, this may be done using Theorem 1 to exploit the original proof of [20]. For any u ∈ ℝ^d, define the q-normalised regret of {w_t}_{t=0}^T by
R_q(w_{1:T}|u) := Σ_{t=1}^T ((1/g_q(u))·uᵀx_t − w_{t−1}ᵀx_t)² − Σ_{t=1}^T ((1/g_q(u))·uᵀx_t − y_t)².   (11)
We have the following bound on R_q for the DN-p-LMS updates (we cannot expect a bound on the unnormalised R(·) of eq. (8), since by Lemma 3 we can only compete against norm-W vectors).
Lemma 4 Pick any u ∈ ℝ^d, p, q satisfying 1/p + 1/q = 1 and p > 2, and W > 0. Suppose ‖x_t‖_p ≤ X_p and |y_t| ≤ Y, ∀t ≤ T. Let {w_t} be as per eq. (10), using the learning rate
η_t := λ_t · W / (4(p − 1)·max{W, X_p}·X_p·W + |y_t − w_{t−1}ᵀx_t|·X_p)   (12)
for any desired λ_t ∈ [1/2, 1]. Then,
R_q(w_{1:T}|u) ≤ 4(p − 1)·X_p²·W² + (16p − 8)·max{W, X_p}·X_p²·W + 8Y·X_p².   (13)
Several remarks can be made. First, the bound depends on the maximal signal value Y, but this is the maximal signal in the observed sequence, so it may not be very large in practice; if it is comparable to W, then our bound is looser than [20] by just a constant factor. Second, the learning rate is adaptive in the sense that its choice depends on the last mistake made. There is a nice way to represent the "offset" vector η_t·∇ℓ_t in eq. (10), since we have, for Q″ := 4(p − 1)·max{W, X_p}·X_p·W,
η_t·∇ℓ_t = W · [|y_t − w_{t−1}ᵀx_t|·X_p / (Q″ + |y_t − w_{t−1}ᵀx_t|·X_p)] · sign(y_t − w_{t−1}ᵀx_t) · (1/X_p)·x_t,   (14)
so the Lp norm of the offset is actually equal to W·Q, where Q ∈ [0, 1] is all the smaller as the vector w_t gets better. Hence, the update in eq. (10) in fact controls all norms (that of w_t, of its image by ∇φ̃_q, and of the offset). Third, because of the normalisation of u, the bound actually does not depend on u, but on the radius W chosen for the Lq ball.
Figure 1: (L) Lifting map into ℝ^d × ℝ for clustering on the sphere with k-means++. (M) D_rec in eq. (15), shown as the vertical thick red line. (R) Lifting map into ℝ^d × ℂ for the hyperboloid.
5 Clustering on a curved manifold via clustering on a flat manifold
Our final application can be related to two problems that have received a steadily growing interest
over the past decade in unsupervised ML: clustering on a non-linear manifold [12], and subspace clustering [36]. We consider two fundamental manifolds investigated by [16] to compute centers of mass from relativistic theory: the sphere S^d and the hyperboloid H^d, the former being of positive
curvature, and the latter of negative curvature. Applications involving these specific manifolds are
numerous in text processing, computer vision, geometric modelling, computer graphics, to name a
few [8, 12, 15, 21, 30, 34]. We emphasize the fact that the clustering problem has significant practical
impact for d as small as 2 in computer vision [34].
The problem is non-trivial for two separate reasons. First, the ambient space, i.e. the space of
registration of the input data, is often implicitly Euclidean and therefore not the manifold [12]: if the
mapping to the manifold is not carefully done, then geodesic distances measured on the manifold may
be inconsistent with respect to the ambient space. Second, the fact that the manifold has non-zero
curvature essentially prevents the direct use of Euclidean optimization algorithms [38]; put simply,
the average of two points that belong to a manifold does not necessarily belong to the manifold, so
we have to be careful about how to compute centroids for hard clustering [16, 27, 30, 31].
What we show now is that Riemannian manifolds with constant sectional curvature may be clustered
with the k-means++ seeding for flat manifolds [2], without even touching a line of the algorithm.
To formalise the problem, we need three key components of Riemannian geometry: tangent planes,
exponential map and geodesics [1]. We assume that the ambient space is a tangent plane to the
manifold M, which conveniently makes it look Euclidean (see Figure 1). The point of tangency is
called q, and the tangent plane T_q M. The exponential map, exp_q : T_q M → M, performs a distance preserving mapping: the geodesic length between q and exp_q(x) in M is the same as the Euclidean length between q and x in T_q M. Our clustering objective is to find C ≐ {c_1, c_2, ..., c_k} ⊂ M such that D_rec(S : C) = inf_{C′⊂M, |C′|=k} D_rec(S, C′), with

D_rec(S, C) ≐ Σ_{i∈[m]∗} min_{j∈[k]∗} D_rec(exp_q(x_i), c_j),  (15)
where D_rec is a reconstruction loss, a function of the geodesic distance between exp_q(x_i) and c_j. We use two loss functions defined from [16] and used in ML for more than a decade [12]:

R₊ ∋ D_rec(y, c) ≐ { 1 − cos D_G(y, c)   for M = S^d,
                     cosh D_G(y, c) − 1  for M = H^d.   (16)
Here, D_G(y, c) is the corresponding geodesic distance of M between y and c. Figure 1 shows that D_rec(y, c) is the orthogonal distance between T_c M and y when M = S^d. The solution to the clustering problem in eq. (15) is therefore the one that minimizes the error between tangent planes defined at the centroids, and points on the manifold.
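As a concrete instance, here is our sketch of the spherical loss, assuming the unit sphere, on which the geodesic distance between two unit vectors is the angle between them:

```python
import numpy as np

def d_rec_sphere(y, c):
    """1 - cos(D_G(y, c)) of eq. (16) for unit vectors y, c on S^d.

    On the unit sphere D_G(y, c) = arccos(<y, c>), so the loss
    simplifies to 1 - <y, c>.
    """
    return 1.0 - np.clip(np.dot(y, c), -1.0, 1.0)  # clip guards rounding noise
```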
It turns out that both distances in eq. (16) can be engineered as Bregman divergences via Theorem 1, as seen in Table 2. Furthermore, they imply the same φ, which is just the generator of Mahalanobis distortion, but a different g. The construction involves a third party, a lifting map (lift(·)) that increases the dimension by one. The sphere lifting map R^d ∋ x ↦ x_S ∈ R^{d+1} is indicated in Table 3 (left). The new coordinate depends on the norm of x. The hyperbolic lifting map, R^d ∋ x ↦ x_H ∈ R^d × C, involves a pure imaginary additional coordinate, and is indicated in Table 3 (right, with a slight abuse of notation) and Figure 1. Both x_S and x_H live on a d-dimensional manifold, depicted in Figure 1.
(Sphere) Sk-means++(S, k)
    Input: dataset S ⊂ T_q S^d, k ∈ N∗;
    Step 1: S⁺ ← {g_S^{−1}(x_S) · x_S : x_S ∈ lift(S)};
    Step 2: C⁺ ← k-means++_seeding(S⁺, k);
    Step 3: C ← exp_q^{−1}(C⁺);
    Output: Cluster centers C ⊂ T_q S^d;
with x_S ≐ [x_1 x_2 · · · x_d  ‖x‖₂ cot ‖x‖₂] and g_S(x_S) ≐ ‖x‖₂ / sin ‖x‖₂.

(Hyperboloid) Hk-means++(S, k)
    Input: dataset S ⊂ T_q H^d, k ∈ N∗;
    Step 1: S⁺ ← {g_H^{−1}(x_H) · x_H : x_H ∈ lift(S)};
    Step 2: C⁺ ← k-means++_seeding(S⁺, k);
    Step 3: C ← exp_q^{−1}(C⁺);
    Output: Cluster centers C ⊂ T_q H^d;
with x_H ≐ [x_1 x_2 · · · x_d  i‖x‖₂ coth ‖x‖₂] and g_H(x_H) ≐ −‖x‖₂ / sinh ‖x‖₂.

Table 3: How to use k-means++ to cluster points on the sphere (left) or the hyperboloid (right).
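A sketch (ours) of the spherical column of Table 3: the lifting map, the gauge g_S, and a numerical check that the scaled lifted point g_S^{−1}(x_S) · x_S lands on the unit sphere, consistent with the remark below that the scaled coordinates are images of the manifold's exponential map.

```python
import numpy as np

def lift_sphere(x):
    """x_S = [x_1 ... x_d  ||x||_2 cot ||x||_2] from Table 3 (left)."""
    r = np.linalg.norm(x)
    return np.append(x, r / np.tan(r))  # assumes 0 < ||x||_2 < pi

def scaled_lift_sphere(x):
    """g_S^{-1}(x_S) * x_S, the input handed to k-means++_seeding (Step 1)."""
    r = np.linalg.norm(x)
    g = r / np.sin(r)          # g_S(x_S) from Table 3
    return lift_sphere(x) / g  # equals (sin(r) * x / r, cos(r))

# Sanity check: the scaled point lies on S^d embedded in R^{d+1}.
x = np.array([0.3, -0.2, 0.5])
assert np.isclose(np.linalg.norm(scaled_lift_sphere(x)), 1.0)
```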
[Plots: one panel per (p, q) ∈ {(1.17, 6.9), (2.0, 2.0), (6.9, 1.17)} and misestimation factor α ∈ {0.2, 0.5, 1.0, 1.3}; x-axis: iterations t up to 40000; y-axis: error difference.]
Table 4: Summary of the experiments displaying (y) the error of p-LMS minus error of DN-p-LMS (when > 0, DN-p-LMS beats p-LMS) as a function of t, in the setting of [20], for various values of (p, q) (columns). Left panel: (D)ense target; Right panel: (S)parse target.
When they are scaled by the corresponding g_·(·), they happen to be mapped to S^d or H^d, respectively, by what happens to be the manifold's exponential map for the original x (see Appendix III). Theorem 1 is interesting in this case because φ corresponds to a Mahalanobis distortion: this shows that k-means++ seeding [2, 25] can be used directly on the scaled coordinates (g_{S,H}^{−1}(x_{S,H}) · x_{S,H}) to pick centroids that yield an approximation of the global optimum for the clustering problem on the manifold which is just as good as the original Euclidean approximation bound [2].
Lemma 5 The expected potential of Sk-means++ seeding over the random choices of C⁺ satisfies:

E[D_rec(S : C)] ≤ 8(2 + log k) · inf_{C′⊂S^d} D_rec(S : C′).  (17)

The same approximation bound holds for Hk-means++ seeding on the hyperboloid (C′, C⁺ ⊂ H^d).
Lemma 5 is notable since it was only recently shown that such a bound is possible for the sphere [15],
and to our knowledge, no such approximation quality is known for clustering on the hyperboloid [30,
31]. Notice that Lloyd iterations on non-linear manifolds would require repetitive renormalizations
to keep centers on the manifold [12], an additional disadvantage compared to clustering on flat
manifolds that {G, K}-means++ seedings do not bear.
6 Experimental validation
We present some experiments validating our theoretical analysis for the applications above.
Multiple density ratio estimation. See Appendix IX for experiments in this domain.
Dual norm p-LMS (DN-p-LMS). We ran p-LMS and the DN-p-LMS of §4 on the experimental
setting of [20]. We refer to that paper for an exhaustive description of the experimental setting, which
we briefly summarize: it is a noisy signal processing setting, involving a dense or a sparse target. We
compute, over the signal received, the error of our predictor on the signal. We keep all parameters as
they are in [20], except for one: we make sure that data are scaled to fit in an L_p ball of prescribed radius, to test the assumption related in [20] that fixing the learning rate η_t is not straightforward in p-LMS. Knowing the true value of X_p, we then scale it by a misestimation factor α, typically
in [0.1, 1.7]. We use the same misestimation in DN-p-LMS. Thus, both algorithms suffer the same
source of uncertainty. Also, we periodically change the signal (every 1,000 iterations), to assess the
performances of the algorithms in tracking changes in the signal.
Experiments, given in extenso in Appendix X, are summarized in Table 4. The following trends emerge: in the mid to long run, DN-p-LMS is never beaten by p-LMS by more than a fraction of a percent. On the other hand, DN-p-LMS can beat p-LMS by very significant differences (exceeding 40%), in particular when p < 2, i.e. when we are outside the regime of the proof of [20].
[Plots: y-axis: rel. improvement (%); x-axis: k from 0 to 50.]
Table 5: (L) Relative improvement (decrease) in k-means potential of SKM+Sk-means++ compared to SKM alone. (R) Relative improvement of Sk-means++ over Forgy initialization on the sphere.
[Plots: y-axes: prop (%) DM beats GKM / iteration #; x-axis: k from 0 to 50.]
Table 6: (L) % of the number of runs of SKM whose output (when it has converged) is better than Sk-means++. (C) Maximal # of iterations for SKM after which it beats Sk-means++ (ignoring runs of SKM that do not beat Sk-means++). (R) Average # of iterations for SKM to converge.
This indicates that significantly stronger and more general results than that of Lemma 4 may be expected. Also, it seems that the problem of p-LMS lies in an 'exploding' norm problem: in various cases, we observe that ‖w_t‖ (in any norm) blows up with t, and this correlates with a very significant degradation of its performance. Clearly, DN-p-LMS does not have this problem, since all relevant norms are under tight control. Finally, even when the norm does not explode, DN-p-LMS can still beat p-LMS, though by less important differences. Of course, the output of p-LMS could repeatedly be normalised, but the normalisation would escape the theory of [20] and it is not clear which normalisation would be best.
Clustering on the sphere. For k ∈ [50]∗, we simulate on T_0 S² a mixture of spherical Gaussian and uniform densities in random rectangles with 2k components. We run three algorithms: (i) SKM [12] on the data embedded on S² with random (Forgy) initialization, (ii) Sk-means++, and (iii) SKM with Sk-means++ initialisation. Results are averaged over the algorithms' runs.
Table 5 (left) displays that using Sk-means++ as initialization for SKM brings a very significant gain over SKM alone, since we almost divide the k-means potential by a factor 2 on some runs. The right plot of Table 5 shows that Sk-means++ consistently reduces the k-means potential by at least a factor 2 over Forgy. The left plot in Table 6 displays that even when it has converged, SKM does not necessarily beat Sk-means++. Finally, the center and right plots in Table 6 display that even when SKM does beat Sk-means++ after converging, the iteration number after which it does so increases with k, and in the worst case may exceed the average number of iterations needed for SKM to converge (we stopped SKM if the relative improvement was not above 1‰).
7 Conclusion
We presented a new scaled Bregman identity, and used it to derive novel results in several fields
of machine learning: multiple density ratio estimation, adaptive filtering, and clustering on curved
manifolds. We believe that, like other known key properties of Bregman divergences, there is potential
for other applications of the result; Appendices V, VI present preliminary thoughts in this direction.
Acknowledgments
The authors wish to thank Bob Williamson and the reviewers for insightful comments.
References
[1] S.-I. Amari and H. Nagaoka. Methods of Information Geometry. Oxford University Press, 2000.
[2] D. Arthur and S. Vassilvitskii. k-means++: the advantages of careful seeding. In 19th SODA, 2007.
[3] A. Banerjee, S. Merugu, I. Dhillon, and J. Ghosh. Clustering with Bregman divergences. JMLR, 6:1705–1749, 2005.
[4] A. Beck and M. Teboulle. Mirror descent and nonlinear projected subgradient methods for convex optimization. Operations Research Letters, 31(3):167–175, 2003.
[5] J.-D. Boissonnat, F. Nielsen, and R. Nock. Bregman Voronoi diagrams. DCG, 44(2):281–307, 2010.
[6] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[7] A. Buja, W. Stuetzle, and Y. Shen. Loss functions for binary class probability estimation and classification: Structure and applications, 2005. Unpublished manuscript.
[8] S.-R. Buss and J.-P. Fillmore. Spherical averages and applications to spherical splines and interpolation. ACM Transactions on Graphics, 20:95–126, 2001.
[9] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning and Games. Cambridge University Press, 2006.
[10] M. Collins, R. Schapire, and Y. Singer. Logistic regression, AdaBoost and Bregman distances. MLJ, 2002.
[11] B. Dacorogna and P. Maréchal. The role of perspective functions in convexity, polyconvexity, rank-one convexity and separate convexity. J. Convex Analysis, 15:271–284, 2008.
[12] I. Dhillon and D.-S. Modha. Concept decompositions for large sparse text data using clustering. MLJ, 42:143–175, 2001.
[13] I.-S. Dhillon and J.-A. Tropp. Matrix nearness problems with Bregman divergences. SIAM Journal on Matrix Analysis and Applications, 29(4):1120–1146, 2008.
[14] J. Duchi, S. Shalev-Shwartz, Y. Singer, and T. Chandra. Efficient projections onto the ℓ1-ball for learning in high dimensions. In ICML '08, pages 272–279, New York, NY, USA, 2008. ACM.
[15] Y. Endo and S. Miyamoto. Spherical k-means++ clustering. In Proc. of the 12th MDAI, pages 103–114, 2015.
[16] G.-A. Galperin. A concept of the mass center of a system of material points in the constant curvature spaces. Communications in Mathematical Physics, 154:63–84, 1993.
[17] E. Hazan and S. Kale. Projection-free online learning. In John Langford and Joelle Pineau, editors, ICML '12, pages 521–528, New York, NY, USA, 2012. ACM.
[18] M. Hernández-Lobato, Y. Li, M. Rowland, D. Hernández-Lobato, T. Bui, and R.-E. Turner. Black-box alpha-divergence minimization. In 33rd ICML, 2016.
[19] M. Jaggi. Revisiting Frank-Wolfe: Projection-free sparse convex optimization. In 30th ICML, 2013.
[20] J. Kivinen, M. Warmuth, and B. Hassibi. The p-norm generalization of the LMS algorithm for adaptive filtering. IEEE Trans. SP, 54:1782–1793, 2006.
[21] D. Kuang, S. Yun, and H. Park. SymNMF: nonnegative low-rank approximation of a similarity matrix for graph clustering. J. Global Optimization, 62:545–574, 2014.
[22] P. Maréchal. On a functional operation generating convex functions, part 1: duality. J. of Optimization Theory and Applications, 126:175–189, 2005.
[23] P. Maréchal. On a functional operation generating convex functions, part 2: algebraic properties. J. of Optimization Theory and Applications, 126:357–366, 2005.
[24] A.-K. Menon and C.-S. Ong. Linking losses for class-probability and density ratio estimation. In ICML, 2016.
[25] R. Nock, P. Luosto, and J. Kivinen. Mixed Bregman clustering with approximation guarantees. In ECML, 2008.
[26] R. Nock and F. Nielsen. Bregman divergences and surrogates for learning. IEEE PAMI, 31:2048–2059, 2009.
[27] R. Nock, F. Nielsen, and S.-I. Amari. On conformal divergences and their population minimizers. IEEE Trans. IT, 62:527–538, 2016.
[28] M. Reid and R. Williamson. Information, divergence and risk for binary experiments. JMLR, 12:731–817, 2011.
[29] M.-D. Reid and R.-C. Williamson. Composite binary losses. JMLR, 11:2387–2422, 2010.
[30] G. Rong, M. Jin, and X. Guo. Hyperbolic centroidal Voronoi tessellation. In 14th ACM SPM, 2010.
[31] O. Schwander and F. Nielsen. Matrix Information Geometry, chapter Learning Mixtures by Simplifying Kernel Density Estimators, pages 403–426. Springer Berlin Heidelberg, 2013.
[32] S. Shalev-Shwartz, Y. Singer, and N. Srebro. Pegasos: Primal estimated sub-gradient solver for SVM. In ICML '07, pages 807–814. ACM, 2007.
[33] H. Shimodaira. Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of Statistical Planning and Inference, 90(2):227–244, 2000.
[34] J. Straub, G. Rosman, O. Freifeld, J.-J. Leonard, and J.-W. Fisher III. A mixture of Manhattan frames: Beyond the Manhattan world. In Proc. of the 27th IEEE CVPR, pages 3770–3777, 2014.
[35] M. Sugiyama, T. Suzuki, and T. Kanamori. Density-ratio matching under the Bregman divergence: a unified framework of density-ratio estimation. AISM, 64(5):1009–1044, 2012.
[36] R. Vidal. Subspace clustering. IEEE Signal Processing Magazine, 28:52–68, 2011.
[37] R.-C. Williamson, E. Vernet, and M.-D. Reid. Composite multiclass losses, 2014. Unpublished manuscript.
[38] H. Zhang and S. Sra. First-order methods for geodesically convex optimization. CoRR, abs/1602.06053, 2016.
5,721 | 6,177 | Disease Trajectory Maps
Raman Arora
Dept. of Computer Science
Johns Hopkins University
Baltimore, MD 21218
[email protected]
Peter Schulam
Dept. of Computer Science
Johns Hopkins University
Baltimore, MD 21218
[email protected]
Abstract
Medical researchers are coming to appreciate that many diseases are in fact complex, heterogeneous syndromes composed of subpopulations that express different
variants of a related complication. Longitudinal data extracted from individual
electronic health records (EHR) offer an exciting new way to study subtle differences in the way these diseases progress over time. In this paper, we focus on
answering two questions that can be asked using these databases of longitudinal
EHR data. First, we want to understand whether there are individuals with similar
disease trajectories and whether there are a small number of degrees of freedom
that account for differences in trajectories across the population. Second, we want
to understand how important clinical outcomes are associated with disease trajectories. To answer these questions, we propose the Disease Trajectory Map (DTM), a
novel probabilistic model that learns low-dimensional representations of sparse and
irregularly sampled longitudinal data. We propose a stochastic variational inference
algorithm for learning the DTM that allows the model to scale to large modern
medical datasets. To demonstrate the DTM, we analyze data collected on patients
with the complex autoimmune disease, scleroderma. We find that DTM learns
meaningful representations of disease trajectories and that the representations are
significantly associated with important clinical outcomes.
1 Introduction
Longitudinal data is becoming increasingly important in medical research and practice. This is due,
in part, to the growing adoption of electronic health records (EHRs), which capture snapshots of
an individual's state over time. These snapshots include clinical observations (apparent symptoms
and vital sign measurements), laboratory test results, and treatment information. In parallel, medical
researchers are beginning to recognize and appreciate that many diseases are in fact complex, highly
heterogeneous syndromes [Craig, 2008] and that individuals may belong to disease subpopulations
or subtypes that express similar sets of symptoms over time (see e.g. Saria and Goldenberg [2015]).
Examples of such diseases include asthma [Lötvall et al., 2011], autism [Wiggins et al., 2012], and
COPD [Castaldi et al., 2014]. The data captured in EHRs can help better understand these complex
diseases. EHRs contain many types of observations and the ability to track their progression can help
bring in to focus the subtle differences across individual disease expression.
In this paper, we focus on two exploratory questions that we can begin to answer using longitudinal
EHR data. First, we want to discover whether there are individuals with similar disease trajectories
and whether there are a small number of degrees of freedom that account for differences across a
heterogeneous population. A low-dimensional characterization of trajectories and how they differ
can yield insights into the biological underpinnings of the disease. In turn, this may motivate new
targeted therapies. In the clinic, physicians can analyze an individual's clinical history to estimate
the low-dimensional representation of the trajectory and can use this knowledge to make more
accurate prognoses and guide treatment decisions by comparing against representations of past
trajectories. Second, we would like to know whether individuals with similar clinical outcomes (e.g.
death, severe organ damage, or development of comorbidities) have similar disease trajectories. In
complex diseases, individuals are often at risk of developing a number of severe complications and
clinicians rarely have access to accurate prognostic biomarkers. Discovering associations between
target outcomes and trajectory patterns may both generate new hypotheses regarding the causes of
these outcomes and help clinicians to better anticipate the event using an individual's clinical history.
Contributions. Our approach to simultaneously answering these questions is to embed individual
disease trajectories into a low-dimensional vector space wherein similarity in the embedded space
implies that two individuals have similar trajectories. Such a representation would naturally answer
our first question, and could also be used to answer the second by comparing distributions over
representations across groups defined by different outcomes. To learn these representations, we
introduce a novel probabilistic model of longitudinal data, which we term the Disease Trajectory Map
(DTM). In particular, the DTM models the trajectory over time of a single clinical marker, which is
an observation or measurement recorded over time by clinicians that is used to track the progression
of a disease (see e.g. Schulam et al. [2015]). Examples of clinical markers are pulmonary function
tests or creatinine laboratory test results, which track lung and kidney function respectively. The
DTM discovers low-dimensional (e.g. 2D or 3D) latent representations of clinical marker trajectories
that are easy to visualize. We describe a stochastic variational inference algorithm for estimating the
posterior distribution over the parameters and individual-specific representations, which allows our
model to be easily applied to large datasets. To demonstrate the DTM, we analyze clinical marker data
collected on individuals with the complex autoimmune disease scleroderma (see e.g. Allanore et al.
[2015]). We find that the learned representations capture interesting subpopulations consistent with
previous findings, and that the representations suggest associations with important clinical outcomes
not captured by alternative representations.
1.1 Background and Related Work
Clinical marker data extracted from EHRs is a by-product of an individual's interactions with the
healthcare system. As a result, the time series are often irregularly sampled (the time between samples
varies within and across individuals), and may be extremely sparse (it is not unusual to have a single
observation for an individual). To aid the following discussion, we briefly introduce notation for
this type of data. We use m to denote the number of individual disease trajectories recorded in a
given dataset. For each individual, we use n_i to denote the number of observations. We collect the observation times for subject i into a column vector t_i ≜ [t_{i1}, ..., t_{in_i}]^⊤ (sorted in non-decreasing order) and the corresponding measurements into a column vector y_i ≜ [y_{i1}, ..., y_{in_i}]^⊤. Our goal is to embed the pair (t_i, y_i) into a low-dimensional vector space wherein similarity between two embeddings x_i and x_j implies that the trajectories have similar shapes. This is commonly done using basis representations of the trajectories.
Fixed basis representations. In the statistics literature, (ti , yi ) is often referred to as unbalanced
longitudinal data, and is commonly analyzed using linear mixed models (LMMs) [Verbeke and
Molenberghs, 2009]. In their simplest form, LMMs assume the following probabilistic model:
w_i | Σ ∼ N(0, Σ),   y_i | B_i, w_i, μ, σ² ∼ N(μ + B_i w_i, σ² I_{n_i}).  (1)
The matrix B_i ∈ R^{n_i×d} is known as the design matrix, and can be used to capture non-linear relationships between the observation times t_i and measurements y_i. Its rows are comprised of d-dimensional basis expansions of each observation time: B_i = [b(t_{i1}), ···, b(t_{in_i})]^⊤. Common choices of b(·) include polynomials, splines, wavelets, and Fourier series. The particular basis used is often carefully crafted by the analyst depending on the nature of the trajectories and on the desired structure (e.g. invariance to translations and scaling) in the representation [Brillinger, 2001]. The design matrix can therefore make the LMM remarkably flexible despite its simple parametric probabilistic assumptions. Moreover, the prior over w_i and the conjugate likelihood make it straightforward to fit Σ, μ, and σ² using EM or Bayesian posterior inference.
After estimating the model parameters, we can estimate the coefficients wi of a given clinical marker
trajectory using the posterior distribution, which embeds the trajectory in a Euclidean space. To
flexibly capture complex trajectory shapes, however, the basis must be high-dimensional, which makes
interpretability of the representations challenging. We can use low-dimensional summaries such
as the projection on to a principal subspace, but these are not necessarily substantively meaningful.
Indeed, much research has gone into developing principal direction post-processing techniques (e.g.
Kaiser [1958]) or alternative estimators that enhance interpretability (e.g. Carvalho et al. [2012]).
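To make the embedding step concrete, here is a minimal sketch (ours): a plain polynomial basis stands in for the B-spline basis used later in the paper, and all parameter values are illustrative.

```python
import numpy as np

def design_matrix(times, d=5):
    # Polynomial basis expansion b(t) = [1, t, t^2, ..., t^{d-1}];
    # stand-in for the B-spline bases described above.
    return np.vander(np.asarray(times, dtype=float), N=d, increasing=True)

def lmm_embedding(t_i, y_i, mu, Sigma, sigma2, d=5):
    """Posterior mean/covariance of w_i under eq. (1), by Gaussian conditioning."""
    B = design_matrix(t_i, d)
    post_prec = np.linalg.inv(Sigma) + B.T @ B / sigma2  # posterior precision
    post_cov = np.linalg.inv(post_prec)
    post_mean = post_cov @ B.T @ (np.asarray(y_i) - mu) / sigma2
    return post_mean, post_cov

# Example: a sparse trajectory with three visits embeds into R^5
# (illustrative values, not from the paper).
w_hat, _ = lmm_embedding([0.5, 2.0, 6.5], [82.0, 75.0, 70.0],
                         mu=70.0, Sigma=np.eye(5), sigma2=4.0)
```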
Data-adaptive basis representations. A set of related, but more flexible, techniques comes from
functional data analysis where observations are functions (i.e. trajectories) assumed to be sampled
from a stochastic process and the goal is to find a parsimonious representation for the data [Ramsay
et al., 2002]. Functional principal component analysis (FPCA), one of the most popular techniques in
functional data analysis, expresses functional data in the orthonormal basis given by the eigenfunctions
of the auto-covariance operator. This representation is optimal in the sense that no other representation
captures more variation [Ramsay, 2006]. The idea itself can be traced back to early independent work
by Karhunen and Loeve and is also referred to as the Karhunen-Loeve expansion [Watanabe, 1965].
While numerous variants of FPCA have been proposed, the one that is most relevant to the problem
at hand is that of sparse FPCA [Castro et al., 1986, Rice and Wu, 2001] where we allow sparse,
irregularly sampled data as is common in longitudinal data analysis. To deal with the sparsity, Rice
and Wu [2001] used LMMs to model the auto-covariance operator. In very sparse settings, however,
LMMs can suffer from numerical instability of covariance matrices in high dimensions. James et al.
[2000] addressed this by constraining the rank of the covariance matrices; we will refer to this
model as the reduced-rank LMM, but note that it is a variant of sparse FPCA. Although sparse FPCA
represents trajectories using a data-driven basis, the basis is restricted to lie in a linear subspace of a
fixed basis, which may be overly restrictive. Other approaches to learning a functional basis include
Bayesian estimation of B-spline parameters (e.g. [Bigelow and Dunson, 2012]) and placing priors
over reproducing kernel Hilbert spaces (e.g. [MacLehose and Dunson, 2009]). Although flexible,
these two approaches do not learn a low-dimensional representation.
Cluster-based representations. Mixture models and clustering approaches are also commonly
used to represent and discover structure in time series data. Marlin et al. [2012] cluster time series
data from the intensive care unit (ICU) using a mixture model and use cluster membership to predict
outcomes. Schulam and Saria [2015] describe a probabilistic model that represents trajectories using
a hierarchy of features, which includes ?subtype? or cluster membership. LMMs have also been
extended to have nonparametric Dirichlet process priors over the coefficients (e.g. Kleinman and
Ibrahim [1998]), which implicitly induce clusters in the data. Although these approaches flexibly
model trajectory data, the structure they recover is a partition, which does not allow us to compare all
trajectories in a coherent way as we can in a vector space.
Lexicon-based representations. Another line of research has investigated the discovery of motifs
or repeated patterns in continuous time-series data for the purposes of succinctly representing the
data as a string of elements of the discovered lexicon. These include efforts in the speech processing
community to identify sub-word units (parts of words comparable to phonemes) in a data-driven
manner [Varadarajan et al., 2008, Levin et al., 2013]. In computational healthcare, Saria et al. [2011]
propose a method for discovering deformable motifs that are repeated in continuous time series
data. These methods are, in spirit, similar to discretization approaches such as symbolic aggregate
approximation (SAX) [Lin et al., 2007] and piecewise aggregate approximation (PAA) [Keogh et al.,
2001] that are popular in data mining, and aim to find compact descriptions of sequential data,
primarily for the purposes of indexing, search, anomaly detection, and information retrieval. The
focus in this paper is to learn representations for entire trajectories rather than discover a lexicon.
Furthermore, we focus on learning a representation in a vector space where similarities among
trajectories are captured through the standard inner product on Rd .
2 Disease Trajectory Maps
To motivate Disease Trajectory Maps (DTM), we begin with the reduced-rank LMM proposed
by James et al. [2000]. We show that the reduced-rank LMM defines a Gaussian process with a
covariance function that linearly depends on trajectory-specific representations. To define DTMs,
we then use the kernel trick to make the dependence non-linear. Let μ ∈ R be the marginal mean of the observations, F ∈ R^{d×q} be a rank-q matrix, and σ² be the variance of measurement errors. As a reminder, y_i ∈ R^{n_i} denotes the vector of observed trajectory measurements, B_i ∈ R^{n_i×d} denotes the individual's design matrix, and x_i ∈ R^q denotes the individual's representation. James et al. [2000] define the reduced-rank LMM using the following conditional distribution:

y_i | B_i, x_i, μ, F, σ² ∼ N(μ + B_i F x_i, σ² I_{n_i}).  (2)
They assume an isotropic normal prior over x_i and marginalize to obtain the observed-data log-likelihood, which is then optimized with respect to {μ, F, σ²}. As in Lawrence [2004], we instead optimize x_i and marginalize F. By assuming a normal prior N(0, αI_q) over the rows of F and marginalizing we obtain:

y_i | B_i, x_i, μ, σ², α ∼ N(μ, α⟨x_i, x_i⟩ B_i B_i^⊤ + σ² I_{n_i}).  (3)
Note that by marginalizing over F, we induce a joint distribution over all trajectories in the dataset. Moreover, this joint distribution is a Gaussian process with mean μ and the following covariance function defined across trajectories, which depends on times {t_i, t_j} and representations {x_i, x_j}:

Cov(y_i, y_j | B_i, B_j, x_i, x_j, μ, σ², α) = α⟨x_i, x_j⟩ B_i B_j^⊤ + I[i = j] (σ² I_{n_i}).  (4)
This reformulation of the reduced-rank LMM highlights that the covariance across trajectories i and j depends on the inner product between the two representations x_i and x_j, and suggests that we can non-linearize the dependency with an inner product in an expanded feature space using the 'kernel trick'. Let k(·,·) denote a non-linear kernel defined over the representations with parameters θ; then we have:

Cov(y_i, y_j | B_i, B_j, x_i, x_j, μ, σ², θ) = k(x_i, x_j) B_i B_j^⊤ + I[i = j] (σ² I_{n_i}).  (5)
Let y ≜ [y_1^⊤, ..., y_m^⊤]^⊤ denote the column vector obtained by concatenating the measurement vectors from each trajectory. The joint distribution over y is a multivariate normal:

y | B_{1:m}, x_{1:m}, μ, σ², θ ∼ N(μ, Σ_DTM + σ² I_n),  (6)

where Σ_DTM is a covariance matrix that depends on the times t_{1:m} (through design matrices B_{1:m}) and representations x_{1:m}. In particular, Σ_DTM is a block-structured matrix with m row blocks and m column blocks. The block at the i-th row and j-th column is the covariance between y_i and y_j defined in (5). Finally, we place isotropic Gaussian priors over x_i. We use Bayesian inference to obtain a posterior Gaussian process and to estimate the representations. We tune hyperparameters by maximizing the observed-data log likelihood. Note that our model is similar to the Bayesian GPLVM [Titsias and Lawrence, 2010], but models functional data instead of finite-dimensional vectors.
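A sketch (ours) of the covariance construction of eqs. (5)-(6), using an RBF kernel with scale α and length-scale ℓ (the kernel family adopted later in the experiments); this is illustrative, not the paper's implementation.

```python
import numpy as np

def rbf(xi, xj, alpha=1.0, length=1.0):
    # RBF kernel on the low-dimensional representations.
    return alpha * np.exp(-np.sum((xi - xj) ** 2) / (2.0 * length ** 2))

def dtm_joint_covariance(Bs, Xs, sigma2, alpha=1.0, length=1.0):
    """Sigma_DTM + sigma^2 I_n: block (i, j) is k(x_i, x_j) B_i B_j^T,
    with measurement noise added on the diagonal (eqs. (5)-(6))."""
    K = np.block([[rbf(xi, xj, alpha, length) * Bi @ Bj.T
                   for xj, Bj in zip(Xs, Bs)]
                  for xi, Bi in zip(Xs, Bs)])
    return K + sigma2 * np.eye(K.shape[0])
```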
2.1 Learning and Inference in the DTM
As formulated, the model scales poorly to large datasets. Inference within each iteration of an optimization algorithm, for example, requires storing and inverting Σ_DTM, which requires O(n²) space and O(n³) time respectively, where n ≜ Σ_{i=1}^m n_i is the number of clinical marker observations. For modern datasets, where n can be in the hundreds of thousands or millions, this is unacceptable. In this section, we approximate the log-likelihood using techniques from Hensman et al. [2013] that allow us to apply stochastic variational inference (SVI) [Hoffman et al., 2013].
Inducing points. Recent work in scaling Gaussian processes to large datasets has focused on the
idea of inducing points [Snelson and Ghahramani, 2005, Titsias, 2009], which are a relatively small
number of artificial observations of a Gaussian process that approximately capture the information
contained in the training data. In general, let f ? Rm denote observations of a GP at inputs {xi }m
i=1
and u ? Rp denote inducing points at inputs {zi }pi=1 . Titsias [2009] constructs the inducing points
as variational parameters by introducing an augmented probability model:
?
u ? N (0, Kpp ) , f | u ? N (Kmp K?1
(7)
pp u, Kmm ),
where Kpp is the Gram matrix between inducing points, Kmm is the Gram matrix between
observations, Kmp is the cross Gram matrix between observations and inducing points, and
? mm , Kmm ? Kmp K?1
K
pp Kpm . We can marginalize over u to construct a low-rank approximate covariance matrix, which is computationally cheaper to invert using the Woodbury identity.
Alternatively, Hensman et al. [2013] extends these ideas by explicitly maintaining a variational
distribution over u that d-separates the observations and satisfies the conditions required to apply
SVI [Hoffman et al., 2013]. Let yf = f + where is iid Gaussian noise with variance ? 2 , then we
use the following inequality to lower bound our data log-likelihood:
Pm
log p(yf | u) ? i=1 Ep(fi |u) [log p(yf i | fi )].
(8)
In the interest of space, we refer the interested reader to Hensman et al. [2013] for details.
DTM evidence lower bound. When marginalizing over the rows of F, we induced a Gaussian process over the trajectories, but by doing so we also implicitly induced a Gaussian process over the individual-specific basis coefficients. Let w_i ≜ F x_i ∈ R^d denote the basis weights implied by the mapping F and representation x_i in the reduced-rank LMM, and let w_{:,k} for k ∈ [d] denote the k-th coefficient of all individuals in the dataset. After marginalizing the k-th row of F and applying the kernel trick, we see that the vector of coefficients w_{:,k} has a Gaussian process distribution with mean zero and covariance function Cov(w_{ik}, w_{jk}) = k(x_i, x_j). Moreover, the Gaussian processes across coefficients are statistically independent of one another. To lower bound the DTM log-likelihood, we introduce p inducing points u_k for each vector of coefficients w_{:,k}, with shared inducing point inputs {z_i}_{i=1}^p. To refer to all inducing points simultaneously, we use U ≜ [u_1, ..., u_d] and u to denote the 'vectorized' form of U obtained by stacking its columns. Applying (8) we have:
log p(y | u, x_{1:m}) ≥ Σ_{i=1}^m E_{p(w_i|u,x_i)}[log p(y_i | w_i)]
   = Σ_{i=1}^m { log N(y_i | μ + B_i U^⊤ K_pp^{−1} k_i, σ² I_{n_i}) − (k̃_ii / 2σ²) Tr[B_i^⊤ B_i] } ≐ Σ_{i=1}^m log p̃(y_i | u, x_i),  (9)
where k_i ≜ [k(x_i, z_1), ..., k(x_i, z_p)]^⊤ and k̃_ii is the i-th diagonal element of K̃_mm. We can then construct the variational lower bound on log p(y):
log p(y) ≥ E_{q(u,x_{1:m})}[log p(y | u, x_{1:m})] − KL(q(u, x_{1:m}) ‖ p(u, x_{1:m}))  (10)
   ≥ Σ_{i=1}^m E_{q(u,x_i)}[log p̃(y_i | u, x_i)] − KL(q(u, x_{1:m}) ‖ p(u, x_{1:m})),  (11)
where we use the lower bound in (9). Finally, to make the lower bound concrete, we specify the variational distribution q(u, x_{1:m}) to be a product of independent multivariate normal distributions:

q(u, x_{1:m}) ≜ N(u | m, S) · Π_{i=1}^m N(x_i | m_i, S_i),  (12)

where the variational parameters to be fit are m, S, and {m_i, S_i}_{i=1}^m.
Stochastic optimization of the lower bound. To apply SVI, we must be able to compute the gradient of the expected value of log p̃(y_i | u, x_i) under the variational distributions. Because u and x_i are assumed to be independent in the variational posteriors, we can analyze the expectation in either order. Fix x_i; then we see that log p̃(y_i | u, x_i) depends on u only through the mean of the Gaussian density, which is a quadratic term in the log likelihood. Because q(u) is multivariate normal, we can compute the expectation in closed form:

E_{q(u)}[log p̃(y_i | u, x_i)] = E_{q(U)}[log N(y_i | μ + (B_i ⊗ k_i^⊤ K_pp^{−1}) u, σ² I_{n_i})] − (k̃_ii / 2σ²) Tr[B_i^⊤ B_i]
   = log N(y_i | μ + C_i m, σ² I_{n_i}) − (1 / 2σ²) Tr[S C_i^⊤ C_i] − (k̃_ii / 2σ²) Tr[B_i^⊤ B_i],

where we have defined C_i ≜ (B_i ⊗ k_i^⊤ K_pp^{−1}) to be the extended design matrix and ⊗ is the Kronecker product. We now need to compute the expectation of this expression with respect to q(x_i), which entails computing the expectations of k_i (a vector) and k_i k_i^⊤ (a matrix). In this paper, we assume an RBF kernel, and so the elements of the vector and matrix are all exponentiated quadratic functions of x_i. This makes the expectations straightforward to compute given that q(x_i) is multivariate normal.¹ We therefore see that the expected value of log p̃(y_i) can be computed in closed form under the assumed variational distribution.
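A direct transcription of the closed-form expectation over u (ours, for a fixed x_i; the inputs k_i, K_pp^{−1}, and k̃_ii are assumed precomputed from the kernel):

```python
import numpy as np
from scipy.stats import multivariate_normal

def expected_log_p_tilde(y_i, B_i, k_i, Kpp_inv, k_tilde_ii, m, S, mu, sigma2):
    """E_{q(u)}[log p~(y_i | u, x_i)] for a fixed representation x_i.

    k_i, Kpp_inv, k_tilde_ii: assumed precomputed from the RBF kernel.
    m, S: variational mean and covariance of the vectorized inducing values u.
    """
    # Extended design matrix C_i = B_i kron (k_i^T Kpp^{-1}).
    C_i = np.kron(B_i, (k_i @ Kpp_inv).reshape(1, -1))
    ll = multivariate_normal.logpdf(y_i, mean=mu + C_i @ m,
                                    cov=sigma2 * np.eye(len(y_i)))
    ll -= np.trace(S @ C_i.T @ C_i) / (2.0 * sigma2)
    ll -= k_tilde_ii * np.trace(B_i.T @ B_i) / (2.0 * sigma2)
    return ll
```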
We use the standard SVI algorithm to optimize the lower bound. We subsample the data, optimize
the likelihood of each example in the batch with respect to the variational parameters over the
representation (mi , Si ), and compute approximate gradients of the global variational parameters (m,
S) and the hyperparameters. The likelihood term is conjugate to the prior over u, and so we can
compute the natural gradients with respect to the global variational parameters m and S [Hoffman
et al., 2013, Hensman et al., 2013]. Additional details on the approximate objective and the gradients
required for SVI are given in the supplement. We provide details on initialization, minibatch selection,
and learning rates for our experiments in Section 3.
Inference on new trajectories. The variational distribution over the inducing point values u can
be used to approximate a posterior process over the basis coefficients wi [Hensman et al., 2013].
Therefore, given a representation xi , we have that
w_{ik} | x_i, m, S ∼ N(k_i^⊤ K_pp^{−1} m_k, k̃_ii + k_i^⊤ K_pp^{−1} S_kk K_pp^{−1} k_i),  (13)

¹Other kernels can be used instead, but the expectations may not have closed form expressions.
where m_k is the approximate posterior mean of the k-th column of U and S_kk is its covariance. The approximate joint posterior distribution over all coefficients can be shown to be multivariate normal. Let μ(x_i) be the mean of this distribution given representation x_i and Σ(x_i) be the covariance; then the posterior predictive distribution over a new trajectory y⋆ given the representation x⋆ is

y⋆ | x⋆ ∼ N(μ + B⋆ μ(x⋆), B⋆ Σ(x⋆) B⋆^⊤ + σ² I_{n⋆}).  (14)

We can then approximately marginalize with respect to the prior over x⋆, or a variational approximation of the posterior given a partial trajectory, using a Monte Carlo estimate.
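A Monte Carlo sketch of this marginalisation (ours; coef_mean and coef_cov stand for the posterior maps μ(·) and Σ(·) above and are assumed given):

```python
import numpy as np

def sample_trajectories(B_star, mu, sigma2, coef_mean, coef_cov,
                        n_samples=100, q=2, seed=0):
    """Draw y* by sampling x* from its N(0, I_q) prior, then eq. (14).

    coef_mean, coef_cov: user-supplied callables for mu(x) and Sigma(x);
    assumed available from the fitted variational posterior.
    """
    rng = np.random.default_rng(seed)
    n_star = B_star.shape[0]
    draws = []
    for _ in range(n_samples):
        x = rng.standard_normal(q)           # prior over representations
        mean = mu + B_star @ coef_mean(x)    # predictive mean of eq. (14)
        cov = B_star @ coef_cov(x) @ B_star.T + sigma2 * np.eye(n_star)
        draws.append(rng.multivariate_normal(mean, cov))
    return np.stack(draws)
```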
3 Experiments
We now use DTM to analyze clinical marker trajectories of individuals with the autoimmune disease,
scleroderma [Allanore et al., 2015]. Scleroderma is a heterogeneous and complex chronic autoimmune
disease. It can potentially affect many of the visceral organs, such as the heart, lungs, kidneys, and
vasculature. Any given individual may experience only a subset of complications, and the timing of
the symptoms relative to disease onset can vary considerably across individuals. Moreover, there are
no known biomarkers that accurately predict an individual's disease course. Clinicians and medical
researchers are therefore interested in characterizing and understanding disease progression patterns.
Moreover, there are a number of clinical outcomes responsible for the majority of morbidity among
patients with scleroderma. These include congestive heart failure, pulmonary hypertension and
pulmonary arterial hypertension, gastrointestinal complications, and myositis [Varga et al., 2012].
We use the DTM to study associations between these outcomes and disease trajectories.
We study two scleroderma clinical markers. The first is the percent of predicted forced vital capacity
(PFVC), a pulmonary function test result measuring lung function. PFVC is recorded in percentage points, and a higher value (near 100) indicates that the individual's lungs are functioning as expected.
The second clinical marker that we study is the total modified Rodnan skin score (TSS). Scleroderma
is named after its effect on the skin, which becomes hard and fibrous during periods of high disease
activity. Because it is the most clinically apparent symptom, many of the current sub-categorizations
of scleroderma depend on an individual?s pattern of skin disease activity over time [Varga et al.,
2012]. To systematically monitor skin disease activity, clinicians use the TSS which is a quantitative
score between 0 and 55 computed by evaluating skin thickness at 17 sites across the body (higher
scores indicate more active skin disease).
3.1 Experimental Setup
For our experiments, we extract trajectories from the Johns Hopkins Hospital Scleroderma Center's patient registry, one of the largest in the world. For both PFVC and TSS, we study the trajectory from
the time of first symptom until ten years of follow-up. The PFVC dataset contains trajectories for
2,323 individuals and the TSS dataset contains 2,239 individuals. The median number of observations
per individuals is 3 for the PFVC data and 2 for the TSS data. The maximum number of observations
is 55 and 22 for PFVC and TSS respectively.
We present two sets of results. First, we visualize groups of similar trajectories obtained by clustering
the representations learned by DTM. Although not quantitative, we use these visualizations as a way
to check that the DTM uncovers subpopulations that are consistent with what is currently known
about scleroderma. Second, we use the learned representations of trajectories obtained using the
LMM, the reduced-rank LMM (which we refer to as FPCA), and the DTM to statistically test for
relationships between important clinical outcomes and learned disease trajectory representations.
For all experiments and all models, we use a common 5-dimensional B-spline basis composed of
degree-2 polynomials (see e.g. Chapter 20 in Gelman et al. [2014]). We choose knots using the
percentiles of observation times across the entire training set [Ramsay et al., 2002]. For LMM and
FPCA, we use EM to fit model parameters. To fit the DTM, we use the LMM estimate to set the
mean μ, noise σ², and average the diagonal elements of Σ to set the kernel scale α. Length-scales ℓ
are set to 1. For these experiments, we do not learn the kernel hyperparameters during optimization.
We initialize the variational means over xi using the first two unit-scaled principal components of wi
estimated by LMM and set the variational covariances to be diagonal with standard deviation 0.1. For
both PFVC and TSS, we use minibatches of size 25 and learn for a total of five epochs (passes over
the training data). The initial learning rate for m and S is 0.1 and decays as t^{−1} for each epoch t.
3.2 Qualitative Analysis of Representations
The DTM returns approximate posteriors over the representations x_i for all individuals in the training set. We examine these posteriors for both the PFVC and TSS datasets to check for consistency with what is currently known about scleroderma disease trajectories.
[Figure: panel (A) plots PFVC trajectory groups [1], [5], [7], [8], [14], [15], [21], [22], [23], [28], [29], [34], [35] (y-axis: Percent of Predicted FVC (PFVC); x-axis: Years Since First Symptom); panel (B) shows the corresponding 2-D representations.]
Figure 1: (A) Groups of PFVC trajectories obtained by hierarchical clustering of DTM representations. (B) Trajectory representations are color-coded and labeled according to groups shown in (A). Contours reflect posterior GP over the second B-spline coefficient (blue contours denote smaller values, red denote larger values).
[Figure: panel (A) plots TSS trajectory groups [1], [5], [6], [10], [11], [15], [16], [19], [20] (y-axis: Total Skin Score (TSS); x-axis: Years Since First Symptom); panel (B) shows the corresponding 2-D representations.]
Figure 2: Same presentation as in Figure 1 but for TSS trajectories.
In Figure 1 (A) we show groups of
trajectories uncovered by clustering the posterior means over the representations, which are plotted
in Figure 1 (B). Many of the groups shown here align with other work on scleroderma lung disease
subtypes (e.g. Schulam et al. [2015]). In particular, we see rapidly declining trajectories (group [5]),
slowly declining trajectories (group [22]), recovering trajectories (group [23]), and stable trajectories
(group [34]). Surprisingly, we also see a group of individuals whom we describe as 'late decliners'
(group [28]). These individuals are stable for the first 5-6 years, but begin to decline thereafter. This
is surprising because the onset of scleroderma-related lung disease is currently thought to occur early
in the disease course [Varga et al., 2012]. In Figure 2 (A) we show clusters of TSS trajectories and the
corresponding mean representations in Figure 2 (B). These trajectories corroborate what is currently
known about skin disease in scleroderma. In particular, we see individuals who have minimal activity
(e.g. group [1]) and individuals with early activity that later stabilizes (e.g. group [11]), which
correspond to what are known as the limited and diffuse variants of scleroderma [Varga et al., 2012].
We also find that there are a number of individuals with increasing activity over time (group [6])
and some whose activity remains high over the ten year period (group [19]). These patterns are not
currently considered to be canonical trajectories and warrant further investigation.
3.3 Associations between Representations and Clinical Outcomes
To quantitatively evaluate the low-dimensional representations learned by the DTM, we statistically
test for relationships between the representations of clinical marker trajectories and important clinical
outcomes. We compare the inferences of the hypothesis test with those made using representations
derived from the LMM and FPCA baselines. For the LMM, we project wi into its 2-dimensional principal subspace. For FPCA, we learn a rank-2 covariance, which learns 2-dimensional representations.
To establish that the models are all equally expressive and achieve comparable generalization error, we
present held-out data log-likelihoods in Table 1, which are estimated using 10-fold cross-validation.
We see that the models are roughly equivalent with respect to generalization error.
To test associations between clinical outcomes and learned representations, we use a kernel density
estimator test [Duong et al., 2012] to test the null hypothesis that the distributions across subgroups
with and without the outcome are equivalent. The p-values obtained are listed in Table 2.
Table 1: Disease Trajectory Held-out Log-Likelihoods

                    PFVC                                 TSS
Model    Subj. LL          Obs. LL           Subj. LL          Obs. LL
LMM      -17.59 (± 1.18)   -3.95 (± 0.04)    -13.63 (± 1.41)   -3.47 (± 0.05)
FPCA     -17.89 (± 1.19)   -4.03 (± 0.02)    -13.76 (± 1.42)   -3.47 (± 0.05)
DTM      -17.74 (± 1.23)   -3.98 (± 0.03)    -13.25 (± 1.38)   -3.32 (± 0.06)
Table 2: P-values under the null hypothesis that the distributions of trajectory representations are the same
across individuals with and without clinical outcomes. Lower values indicate stronger support for rejection.
                                         PFVC                    TSS
Outcome                           LMM    FPCA   DTM       LMM    FPCA   DTM
Congestive Heart Failure          0.170  0.081  0.013     0.107  0.383  0.189
Pulmonary Hypertension            0.270  0.000  0.000     0.485  0.606  0.564
Pulmonary Arterial Hypertension   0.013  0.020  0.002     0.712  0.808  0.778
Gastrointestinal Complications    0.328  0.073  0.347     0.026  0.035  0.011
Myositis                          0.337  0.002  0.004     0.000  0.002  0.000
Interstitial Lung Disease         0.000  0.000  0.000     0.553  0.515  0.495
Ulcers and Gangrene               0.410  0.714  0.514     0.573  0.316  0.009
As a point of reference, we include two clinical outcomes that should be clearly related to the two clinical markers.
Interstitial lung disease is the most common cause of lung damage in scleroderma [Varga et al., 2012],
and so we confirm that the null hypothesis is rejected for all three PFVC representations. Similarly,
for TSS we expect ulcers and gangrene to be associated with severe skin disease. In this case, only
the representations learned by DTM reveal this relationship. For the remaining outcomes, we see
that FPCA and DTM reveal similar associations, but that only DTM suggests a relationship with
pulmonary arterial hypertension (PAH). Presence of fibrosis (which drives lung disease progression)
has been shown to be a risk factor in the development of PAH (see Chapter 36 of Varga et al. [2012]),
but only the representations learned by DTM corroborate this association (see Figure 3).
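The kernel density test used above has a closed form in Duong et al. [2012]; a simpler permutation version with a KDE-based statistic conveys the idea. The sketch below is illustrative only (the Gaussian bandwidth rule, grid construction and statistic are assumptions, not the exact closed-form test of Duong et al.):

import numpy as np
from scipy.stats import gaussian_kde

def kde_two_sample_pvalue(x, y, n_perm=2000, seed=0):
    """Permutation p-value for H0: the representations x and y (arrays of
    shape (n, d)) are drawn from the same distribution. The statistic is the
    mean squared difference of the two KDEs on a subsample of pooled points."""
    rng = np.random.default_rng(seed)
    pooled = np.vstack([x, y])
    grid = pooled[rng.choice(len(pooled), size=min(len(pooled), 500), replace=False)]

    def stat(a, b):
        fa, fb = gaussian_kde(a.T), gaussian_kde(b.T)   # gaussian_kde expects (d, n) data
        return np.mean((fa(grid.T) - fb(grid.T)) ** 2)

    t_obs, n_x, count = stat(x, y), len(x), 0
    for _ in range(n_perm):
        perm = rng.permutation(len(pooled))
        if stat(pooled[perm[:n_x]], pooled[perm[n_x:]]) >= t_obs:
            count += 1
    return (1 + count) / (1 + n_perm)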
4 Conclusion
We presented the Disease Trajectory Map (DTM), a novel probabilistic model that learns low-dimensional embeddings of sparse and irregularly sampled clinical time series data. The DTM is a reformulation of the LMM. We derived it using an approach comparable to that of Lawrence [2004] in deriving the Gaussian process latent variable model (GPLVM) from probabilistic principal component analysis (PPCA) [Tipping and Bishop, 1999], and indeed the DTM can be interpreted as a "twin kernel" GPLVM (briefly discussed in the concluding paragraphs) over functional observations. The DTM can also be viewed as an LMM with a "warped" Gaussian prior over the random effects (see e.g. Damianou et al. [2015] for a discussion of distributions induced by mapping Gaussian random variables through non-linear maps). We demonstrated the model by analyzing data extracted from one of the nation's largest scleroderma patient registries, and found that the DTM discovers structure among trajectories that is consistent with previous findings and also uncovers several surprising disease trajectory shapes. We also explored associations between important clinical outcomes and the DTM's representations, and found statistically significant differences in representations between outcome-defined groups that were not uncovered by two sets of baseline representations.
Acknowledgments. PS is supported by an NSF Graduate Research Fellowship. RA is supported in part by
NSF BIGDATA grant IIS-1546482.
[Figure 3: Scatter plots of PFVC representations for the three models, (A) LMM, (B) FPCA, and (C) DTM, color-coded by presence or absence of pulmonary arterial hypertension (PAH). Groups of trajectories with very few cases of PAH are circled in green.]
References
Allanore et al. Systemic sclerosis. Nature Reviews Disease Primers, page 15002, 2015.
Jamie L Bigelow and David B Dunson. Bayesian semiparametric joint models for functional predictors. Journal of the American Statistical Association, 2012.
David R Brillinger. Time series: data analysis and theory, volume 36. SIAM, 2001.
Carvalho et al. High-dimensional sparse factor modeling: applications in gene expression genomics. Journal of the American Statistical Association, 2012.
P.J. Castaldi et al. Cluster analysis in the COPDGene study identifies subtypes of smokers with distinct patterns of airway disease and emphysema. Thorax, 2014.
PE Castro, WH Lawton, and EA Sylvestre. Principal modes of variation for processes with continuous sample curves. Technometrics, 28(4):329–337, 1986.
J. Craig. Complex diseases: Research and applications. Nature Education, 1(1):184, 2008.
A. C. Damianou, M. K. Titsias, and N. D. Lawrence. Variational inference for latent variables and uncertain inputs in Gaussian processes. JMLR, 2, 2015.
T. Duong, B. Goud, and K. Schauer. Closed-form density-based framework for automatic detection of cellular morphology changes. Proceedings of the National Academy of Sciences, 109(22):8382–8387, 2012.
Andrew Gelman et al. Bayesian data analysis, volume 2. Taylor & Francis, 2014.
J. Hensman, N. Fusi, and N.D. Lawrence. Gaussian processes for big data. arXiv:1309.6835, 2013.
M.D. Hoffman, D.M. Blei, C. Wang, and J. Paisley. Stochastic variational inference. JMLR, 14(1):1303–1347, 2013.
G.M. James, T.J. Hastie, and C.A. Sugar. Principal component models for sparse functional data. Biometrika, 87(3):587–602, 2000.
H.F. Kaiser. The varimax criterion for analytic rotation in factor analysis. Psychometrika, 23(3):187–200, 1958.
E. Keogh et al. Locally adaptive dimensionality reduction for indexing large time series databases. ACM SIGMOD Record, 30(2):151–162, 2001.
K.P. Kleinman and J.G. Ibrahim. A semiparametric Bayesian approach to the random effects model. Biometrics, pages 921–938, 1998.
N.D. Lawrence. Gaussian process latent variable models for visualisation of high dimensional data. Advances in Neural Information Processing Systems, 16(3):329–336, 2004.
K. Levin, K. Henry, A. Jansen, and K. Livescu. Fixed-dimensional acoustic embeddings of variable-length segments in low-resource settings. In ASRU, pages 410–415. IEEE, 2013.
Jessica Lin, Eamonn Keogh, Li Wei, and Stefano Lonardi. Experiencing SAX: a novel symbolic representation of time series. Data Mining and Knowledge Discovery, 15(2):107–144, 2007.
J. Lötvall et al. Asthma endotypes: a new approach to classification of disease entities within the asthma syndrome. Journal of Allergy and Clinical Immunology, 127(2):355–360, 2011.
Richard F MacLehose and David B Dunson. Nonparametric Bayes kernel-based priors for functional data analysis. Statistica Sinica, pages 611–629, 2009.
B.M. Marlin et al. Unsupervised pattern discovery in electronic health care data using probabilistic clustering models. In Proc. ACM SIGHIT International Health Informatics Symposium, pages 389–398. ACM, 2012.
James Ramsay et al. Applied functional data analysis: methods and case studies. Springer, 2002.
James O Ramsay. Functional data analysis. Wiley Online Library, 2006.
J.A. Rice and C.O. Wu. Nonparametric mixed effects models for unequally sampled noisy curves. Biometrics, 57(1):253–259, 2001.
S. Saria and A. Goldenberg. Subtyping: What it is and its role in precision medicine. Int. Sys., IEEE, 2015.
S. Saria et al. Discovering deformable motifs in continuous time series data. In IJCAI, volume 22, 2011.
P. Schulam and S. Saria. A framework for individualizing predictions of disease trajectories by exploiting multi-resolution structure. In Advances in Neural Information Processing Systems, pages 748–756, 2015.
P. Schulam, F. Wigley, and S. Saria. Clustering longitudinal clinical marker trajectories from electronic health data: Applications to phenotyping and endotype discovery. In AAAI, pages 2956–2964, 2015.
E. Snelson and Z. Ghahramani. Sparse Gaussian processes using pseudo-inputs. In NIPS, 2005.
M. E. Tipping and C. M. Bishop. Probabilistic principal component analysis. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 61(3):611–622, 1999.
M.K. Titsias. Variational learning of inducing variables in sparse Gaussian processes. In AISTATS, 2009.
M.K. Titsias and N.D. Lawrence. Bayesian Gaussian process latent variable model. In AISTATS, 2010.
B. Varadarajan et al. Unsupervised learning of acoustic sub-word units. In Proc. ACL, pages 165–168, 2008.
J. Varga et al. Scleroderma: From pathogenesis to comprehensive management. Springer, 2012.
G. Verbeke and G. Molenberghs. Linear mixed models for longitudinal data. Springer, 2009.
S. Watanabe. Karhunen–Loève expansion and factor analysis, theoretical remarks and applications. In Proc. 4th Prague Conf. Inform. Theory, 1965.
L.D. Wiggins et al. Support for a dimensional view of autism spectrum disorders in toddlers. Journal of Autism and Developmental Disorders, 42(2):191–200, 2012.
On Explore-Then-Commit Strategies
Aurélien Garivier*
Institut de Mathématiques de Toulouse; UMR5219
Université de Toulouse; CNRS
UPS IMT, F-31062 Toulouse Cedex 9, France
[email protected]
Emilie Kaufmann
Univ. Lille, CNRS, Centrale Lille, Inria SequeL
UMR 9189, CRIStAL - Centre de Recherche en Informatique Signal et Automatique de Lille
F-59000 Lille, France
[email protected]
Tor Lattimore
University of Alberta
116 St & 85 Ave, Edmonton, AB T6G 2R3, Canada
[email protected]
Abstract
We study the problem of minimising regret in two-armed bandit problems with
Gaussian rewards. Our objective is to use this simple setting to illustrate that
strategies based on an exploration phase (up to a stopping time) followed by
exploitation are necessarily suboptimal. The results hold regardless of whether
or not the difference in means between the two arms is known. Besides the
main message, we also refine existing deviation inequalities, which allow us to
design fully sequential strategies with finite-time regret guarantees that are (a)
asymptotically optimal as the horizon grows and (b) order-optimal in the minimax
sense. Furthermore we provide empirical evidence that the theory also holds in
practice and discuss extensions to non-gaussian and multiple-armed case.
1 Introduction
It is now a very frequent issue for companies to optimise their daily profits by choosing between
one of two possible website layouts. A natural approach is to start with a period of A/B Testing
(exploration) during which the two versions are uniformly presented to users. Once the testing is
complete, the company displays the version believed to generate the most profit for the rest of the
month (exploitation). The time spent exploring may be chosen adaptively based on past observations,
but could also be fixed in advance. Our contribution is to show that strategies of this form are
much worse than if the company is allowed to dynamically select which website to display without
restrictions for the whole month.
Our analysis focusses on a simple sequential decision problem played over T time-steps. In time-step
t ? 1, 2, . . . , T the agent chooses an action At ? {1, 2} and receives a normally distributed reward
* This work was partially supported by the CIMI (Centre International de Mathématiques et d'Informatique) Excellence program while Emilie Kaufmann visited Toulouse in November 2015. The authors acknowledge the support of the French Agence Nationale de la Recherche (ANR), under grants ANR-13-BS01-0005 (project SPADRO) and ANR-13-CORD-0020 (project ALICIA).
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Z_t ∼ N(μ_{A_t}, 1), where μ1, μ2 ∈ R are the unknown mean rewards for actions 1 and 2 respectively. The goal is to find a strategy π (a way of choosing each action A_t based on past observations) that maximises the cumulative reward over T steps in expectation, or equivalently minimises the regret

    R^π_μ(T) = T max{μ1, μ2} − E_μ[ Σ_{t=1}^{T} μ_{A_t} ].   (1)
has been studied for almost a century [Thompson, 1933]. Although this setting is now quite well
understood, the purpose of this article is to show that strategies based on distinct phases of exploration
and exploitation are necessarily suboptimal. This is an important message because exploration
followed by exploitation is the most natural approach and is often implemented in applications
(including the website optimisation problem described above). Moreover, strategies of this kind
have been proposed in the literature for more complicated settings [Auer and Ortner, 2010, Perchet
and Rigollet, 2013, Perchet et al., 2015]. Recent progress on optimal exploration policies (e.g., by
Garivier and Kaufmann [2016]) could have suggested that well-tuned variants of two-phase strategies
might be near-optimal. We show, on the contrary, that optimal strategies for multi-armed bandit
problems must be fully-sequential, and in particular should mix exploration and exploitation. It is
known since the work of Wald [1945] on simple hypothesis testing that sequential procedures can lead
to significant gains. Here, the superiority of fully sequential procedures is consistent with intuition: if
one arm first appears to be better, but if subsequent observations are disappointing, the obligation to
commit at some point can be restrictive. In this paper, we give a crisp and precise description of how
restrictive it is: it leads to regret asymptotically twice as large on average. The proof of this result
combines some classical techniques of sequential analysis and of the bandit literature.
We study two settings, one when the gap Δ = |μ1 − μ2| is known and the other when it is not. The most straight-forward strategy in the former case is to explore each action a fixed number of times n and subsequently exploit by choosing the action that appeared best while exploring. It is easy to calculate the optimal n and consequently show that this strategy suffers a regret of R^π_μ(T) ∼ 4 log(T)/Δ. A more general approach is to use a so-called Explore-Then-Commit (ETC) strategy, following a nomenclature introduced by Perchet et al. [2015]. An ETC strategy explores each action alternately until some data-dependent stopping time and subsequently commits to a single action for the remaining time-steps. We show in Theorem 2 that by using a sequential probability ratio test (SPRT) it is possible to design an ETC strategy for which R^π_μ(T) ∼ log(T)/Δ, which improves on the above result by a factor of 4. We also prove a lower bound showing that no ETC strategy can improve on this result. Surprisingly it is possible to do even better by using a fully sequential strategy inspired by the UCB algorithm for multi-armed bandits [Katehakis and Robbins, 1995]. We design a new strategy for which R^π_μ(T) ∼ log(T)/(2Δ), which improves on the fixed-design strategy by a factor of 8 and on SPRT by a factor of 2. Again we prove a lower bound showing that no strategy can improve on this result.
For the case where Δ is unknown, fixed-design strategies are hopeless because there is no reasonable tuning for the exploration budget n. However, it is possible to design an ETC strategy for unknown gaps. Our approach uses a modified fixed-budget best arm identification (BAI) algorithm in its exploration phase (see e.g., Even-Dar et al. [2006], Garivier and Kaufmann [2016]) and chooses the recommended arm for the remaining time-steps. In Theorem 5 we show that a strategy based on this idea satisfies R^π_μ(T) ∼ 4 log(T)/Δ, which again we show is optimal within the class of ETC strategies. As before, strategies based on ETC are suboptimal by a factor of 2 relative to the optimal rates achieved by fully sequential strategies such as UCB, which satisfies R^π_μ(T) ∼ 2 log(T)/Δ [Katehakis and Robbins, 1995].
In a nutshell, strategies based on fixed-design or ETC are necessarily suboptimal. That this failure
occurs even in the simple setting considered here is a strong indicator that they are suboptimal in
more complicated settings. Our main contribution, presented in more details in Section 2, is to fully
characterise the achievable asymptotic regret when ? is either known or unknown and the strategies
are either fixed-design, ETC or fully sequential. All upper bounds have explicit finite-time forms,
which allow us to derive optimal minimax guarantees. For the lower bounds we give a novel and
generic proof of all results. All proofs contain new, original ideas that we believe are fundamental to
the understanding of sequential analysis.
2 Notation and Summary of Results

We assume that the horizon T is known to the agent. The optimal action is a* = arg max(μ1, μ2), its mean reward is μ* = μ_{a*}, and the gap between the means is Δ = |μ1 − μ2|. Let H = R² be the set of all possible pairs of means, and H_Δ = {μ ∈ R² : |μ1 − μ2| = Δ}. For i ∈ {1, 2} and n ∈ N, let μ̂_{i,n} be the empirical mean of the i-th action based on the first n samples. Let A_t be the action chosen in time-step t and N_i(t) = Σ_{s=1}^{t} 1{A_s = i} be the number of times the i-th action has been chosen after time-step t. We denote by μ̂_i(t) = μ̂_{i,N_i(t)} the empirical mean of the i-th arm after time-step t.
A strategy is denoted by π, which is a function from past actions/rewards to a distribution over the next actions. An ETC strategy is governed by a sampling rule (which determines which arm to sample at each step), a stopping rule (which specifies when to stop the exploration phase) and a decision rule indicating which arm is chosen in the exploitation phase. As we consider two-armed Gaussian bandits with equal variances, we focus here on uniform sampling rules, which have been shown in Kaufmann et al. [2014] to be optimal in that setting. For this reason, we define an ETC strategy as a pair (τ, â), where τ is an even stopping time with respect to the filtration (F_t = σ(Z_1, . . . , Z_t))_t and â ∈ {1, 2} is F_τ-measurable. In all the ETC strategies presented in this paper, the stopping time τ depends on the horizon T (although this is not reflected in the notation). At time t, the action picked by the ETC strategy is

    A_t = 1 if t ≤ τ and t is odd,   A_t = 2 if t ≤ τ and t is even,   A_t = â otherwise.
The regret for strategy π, given in Eq. (1), depends on T and μ. Assuming, for example, that μ1 = μ2 + Δ, an ETC strategy π chooses the suboptimal arm N_2(T) = (τ∧T)/2 + (T − τ)_+ 1{â = 2} times, and the regret R^π_μ(T) = Δ E_μ[N_2(T)] thus satisfies

    Δ E_μ[(τ∧T)/2] ≤ R^π_μ(T) ≤ (Δ/2) E_μ[τ∧T] + ΔT P_μ(τ ≤ T, â ≠ a*).   (2)
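Since an ETC strategy is fully specified by its stopping and decision rules, the quantity N_2(T) in the decomposition above can be simulated generically; a minimal sketch (uniform sampling during exploration, then commitment; helper names are illustrative):

import numpy as np

def run_etc(stop_rule, mu, T, seed=0):
    """Sample arms 0 and 1 alternately until stop_rule(n, m0, m1) fires after n
    pulls of each arm (with empirical means m0, m1), then commit to the leader.
    Returns the number of pulls of the suboptimal arm, as in Eq. (2)."""
    rng = np.random.default_rng(seed)
    sums, n, t = [0.0, 0.0], 0, 0
    while t + 2 <= T:
        for a in (0, 1):
            sums[a] += rng.normal(mu[a], 1.0)
        n, t = n + 1, t + 2
        if stop_rule(n, sums[0] / n, sums[1] / n):
            break
    a_hat = int(sums[1] > sums[0])
    worst = int(np.argmin(mu))
    return n + (T - t) * int(a_hat == worst)  # (tau ^ T)/2 + (T - tau)+ 1{a_hat suboptimal}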
We denote the set of all ETC strategies by Π_ETC. A fixed-design strategy is an ETC strategy for which there exists an integer n such that τ = 2n almost surely, and the set of all such strategies is denoted by Π_DETC. The set of all strategies is denoted by Π_ALL. For S ∈ {H, H_Δ}, we are interested in strategies π that are uniformly efficient on S, in the sense that

    ∀μ ∈ S, ∀α > 0:  R^π_μ(T) = o(T^α).   (3)

We show in this paper that any uniformly efficient strategy in Π_ALL, Π_ETC or Π_DETC has a regret at least equal to C_S log(T)/|μ1 − μ2| (1 − o_T(1)) for every parameter μ ∈ S, where C_S is given in the following table. Furthermore, we prove that these results are tight. In each case, we propose a uniformly efficient strategy matching this bound. In addition, we prove a tight and non-asymptotic regret bound which also implies, in particular, minimax rate-optimality.

           Π_ALL   Π_ETC   Π_DETC
    H        2       4       NA
    H_Δ     1/2      1        4
The paper is organised as follows. First we consider ETC and fixed-design strategies when Δ is known and unknown (Section 3). We then analyse fully sequential strategies that interleave exploration and exploitation in an optimal way (Section 4). For known Δ we present a novel algorithm that exploits the additional information to improve the regret. For unknown Δ we briefly recall the well-known results, but also propose a new regret analysis of the UCB* algorithm, a variant of UCB that can be traced back to Lai [1987], for which we also obtain order-optimal minimax regret. Numerical experiments illustrate and empirically support our results in Section 5. We conclude with a short discussion on non-uniform exploration, and on models with more than 2 arms, possibly non-Gaussian. All the proofs are given in the supplementary material. In particular, our simple, unified proof for all the lower bounds is given in Appendix A.
3 Explore-Then-Commit Strategies
Fixed Design Strategies for Known Gaps. As a warm-up we start with the fixed-design ETC setting where Δ is known and where the agent chooses each action n times before committing for the remainder. The optimal decision rule is obviously â = arg max_i μ̂_{i,n}, with ties broken arbitrarily. The formal description of the strategy is given in Algorithm 1, where W denotes the Lambert function, implicitly defined for y > 0 by W(y) exp(W(y)) = y. We denote the regret associated to the choice of n by R^n_μ(T). The following theorem is not especially remarkable except that the bound is sufficiently refined to show certain negative lower-order terms that would otherwise not be apparent.

    input: T and Δ
    n := ⌈2 W(T²Δ⁴/(32π)) / Δ²⌉
    for k ∈ {1, . . . , n} do
        choose A_{2k−1} = 1 and A_{2k} = 2
    end for
    â := arg max_i μ̂_{i,n}
    for t ∈ {2n + 1, . . . , T} do
        choose A_t = â
    end for
    Algorithm 1: FB-ETC algorithm
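A sketch of the exploration budget of Algorithm 1, using the real branch of SciPy's Lambert W function (the helper names are illustrative; the stop rule plugs into the generic run_etc sketch of Section 2):

import math
from scipy.special import lambertw

def fb_etc_budget(T, delta):
    """Exploration budget of Algorithm 1: n = ceil(2 W(T^2 d^4 / (32 pi)) / d^2)."""
    y = T**2 * delta**4 / (32 * math.pi)
    w = lambertw(y).real                  # principal branch is real for y > 0
    return max(1, math.ceil(2 * w / delta**2))

def fb_etc_stop_rule(T, delta):
    n_budget = fb_etc_budget(T, delta)
    return lambda n, m0, m1: n >= n_budget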
Theorem 1. Let μ ∈ H_Δ, and let n = ⌈(2/Δ²) W(T²Δ⁴/(32π))⌉. Then

    R^n_μ(T) ≤ (4/Δ) log( TΔ²/(4√(2π)) ) − (2/Δ) log log( TΔ²/4.46 ) + Δ

whenever TΔ² > 4√(2π) e, and R^n_μ(T) ≤ TΔ/2 + Δ otherwise. In all cases, R^n_μ(T) ≤ 2.04√T + Δ. Furthermore, for all ε > 0, T ≥ 1 and n ≤ 4(1 − ε) log(T)/Δ²,

    R^n_μ(T) ≥ (1 − nΔ²/(8 log(T))) · ΔT^ε / (2√(2π log(T))).

As R^n_μ(T) ≥ nΔ, this entails that inf_{1≤n≤T} R^n_μ(T) ∼ 4 log(T)/Δ.
The proof of Theorem 1 is in Appendix B. Note that the "asymptotic lower bound" 4 log(T)/Δ is actually not a lower bound, even up to an additive constant: R^n_μ(T) − 4 log(T)/Δ → −∞ when T → ∞. Actually, the same phenomenon applies in many other cases, and it should be no surprise that, in numerical experiments, some algorithms reach a regret smaller than Lai and Robbins' asymptotic lower bound, as was already observed in several articles (see e.g. Garivier et al. [2016]). Also note that the term Δ at the end of the upper bound is necessary: if Δ is large, the problem is statistically so simple that one single observation is sufficient to identify the best arm; but that observation cannot be avoided.
Explore-Then-Commit Strategies for Known Gaps. We now show the existence of ETC strategies that improve on the optimal fixed-design strategy. Surprisingly, the gain is significant. We describe an algorithm inspired by ideas from hypothesis testing and prove an upper bound on its regret that is minimax optimal and that asymptotically matches our lower bound.

Let P be the law of X − Y, where X (resp. Y) is a reward from arm 1 (resp. arm 2). As Δ is known, the exploration phase of an ETC algorithm can be viewed as a statistical test of the hypothesis H1: (P = N(Δ, 2)) against H2: (P = N(−Δ, 2)). The work of Wald [1945] shows that a significant gain in terms of expected number of samples can be obtained by using a sequential rather than a batch test. Indeed, for a batch test, a sample size of n ≈ (4/Δ²) log(1/δ) is necessary to guarantee that both type I and type II errors are upper bounded by δ. In contrast, when a random number of samples is permitted, there exists a sequential probability ratio test (SPRT) with the same guarantees that stops after a random number N of samples with expectation E[N] ≈ log(1/δ)/Δ² under both H1 and H2. The SPRT stops when the absolute value of the log-likelihood ratio between H1 and H2 exceeds some threshold. Asymptotic upper bounds on the expected number of samples used by a SPRT, as well as the (asymptotic) optimality of such procedures among the class of all sequential tests, can be found in [Wald, 1945, Siegmund, 1985].
Algorithm 2 is an ETC strategy that explores each action alternately, halting when sufficient confidence is reached according to a SPRT. The threshold depends on the gap Δ and the horizon T, corresponding to a risk of δ = 1/(TΔ²). The exploration phase ends at the stopping time

    τ = inf{ t = 2n : |μ̂_{1,n} − μ̂_{2,n}| ≥ log(TΔ²)/(nΔ) }.

If τ < T then the empirical best arm â at time τ is played until time T. If TΔ² ≤ 1, then τ = 1 (one could even define τ = 0 and pick a random arm).

    input: T and Δ
    A_1 = 1, A_2 = 2, t := 2
    while (t/2) Δ |μ̂_1(t) − μ̂_2(t)| < log(TΔ²) do
        choose A_{t+1} = 1 and A_{t+2} = 2
        t := t + 2
    end while
    â := arg max_i μ̂_i(t)
    while t ≤ T do
        choose A_t = â
        t := t + 1
    end while
    Algorithm 2: SPRT-ETC algorithm
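The SPRT stopping rule of Algorithm 2 takes one line in the same framework; a sketch (the clipping of the threshold for TΔ² ≤ 1 is an implementation choice matching the remark above):

import math

def sprt_stop_rule(T, delta):
    """Stop once n * delta * |m0 - m1| >= log(T delta^2), as in Algorithm 2."""
    threshold = math.log(max(T * delta**2, 1.0))
    return lambda n, m0, m1: n * delta * abs(m0 - m1) >= threshold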
The following theorem gives a non-asymptotic upper bound on the regret of the algorithm. The results rely on non-asymptotic upper bounds on the expectation of τ, which are interesting in their own right.

Theorem 2. If TΔ² ≥ 1, then the regret of the SPRT-ETC algorithm is upper-bounded as

    R^{SPRT-ETC}_μ(T) ≤ log(eTΔ²)/Δ + (4√(log(TΔ²)) + 4)/Δ + Δ.

Otherwise it is upper bounded by TΔ/2 + Δ, and for all T and Δ the regret is less than 10√(T/e) + Δ.
The proof of Theorem 2 is given in Appendix C. The following lower bound shows that no uniformly efficient ETC strategy can improve on the asymptotic regret of Algorithm 2. The proof is given in Section A together with the other lower bounds.

Theorem 3. Let π be an ETC strategy that is uniformly efficient on H_Δ. Then for all μ ∈ H_Δ,

    lim inf_{T→∞} R^π_μ(T)/log(T) ≥ 1/Δ.
Explore-Then-Commit Strategies for Unknown Gaps. When the gap is unknown it is not possible to tune a fixed-design strategy that achieves logarithmic regret. ETC strategies can enjoy logarithmic regret and these are now analysed. We start with the asymptotic lower bound.

Theorem 4. Let π be a uniformly efficient ETC strategy on H. For all μ ∈ H, if Δ = |μ1 − μ2| then

    lim inf_{T→∞} R^π_μ(T)/log(T) ≥ 4/Δ.
A simple idea for constructing an algorithm that matches the lower bound is to use a (fixed-confidence) best arm identification algorithm for the exploration phase. Given a risk parameter δ, a δ-PAC BAI algorithm consists of a sampling rule (A_t), a stopping rule τ and a recommendation rule â which is F_τ-measurable and satisfies, for all μ ∈ H such that μ1 ≠ μ2, P_μ(â = a*) ≥ 1 − δ. In a bandit model with two Gaussian arms, Kaufmann et al. [2014] propose a δ-PAC algorithm using a uniform sampling rule and a stopping rule τ_δ that asymptotically attains the minimal sample complexity E_μ[τ_δ] ∼ (8/Δ²) log(1/δ). Using the regret decomposition (2), it is easy to show that the ETC algorithm using the stopping rule τ_δ for δ = 1/T matches the lower bound of Theorem 4.
Algorithm 3 is a slight variant of this optimal BAI algorithm, based on the stopping time

    τ = inf{ t = 2n : |μ̂_{1,n} − μ̂_{2,n}| > √( 4 log(T/(2n)) / n ) }.

The motivation for the difference (which comes from a more carefully tuned threshold featuring log(T/2n) in place of log(T)) is that the confidence level should depend on the unknown gap Δ, which determines the regret when a mis-identification occurs. The improvement only appears in the non-asymptotic regime where we are able to prove both asymptotic optimality and order-optimal minimax regret. The latter would not be possible using a fixed-confidence BAI strategy. The proof of this result can be found in Appendix D. The main difficulty is developing a sufficiently strong deviation bound, which we do in Appendix G, and that may be of independent interest. Note that a similar strategy was proposed and analysed by Lai et al. [1983], but in the continuous time framework and with asymptotic analysis only.

    input: T (≥ 3)
    A_1 = 1, A_2 = 2, t := 2
    while |μ̂_1(t) − μ̂_2(t)| < √( 8 log(T/t) / t ) do
        choose A_{t+1} = 1 and A_{t+2} = 2
        t := t + 2
    end while
    â := arg max_i μ̂_i(t)
    while t ≤ T do
        choose A_t = â
        t := t + 1
    end while
    Algorithm 3: BAI-ETC algorithm
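The stopping rule of Algorithm 3, in the same form as the sketches above (with t = 2n the total number of samples, and a forced stop at the horizon as an implementation choice):

import math

def bai_etc_stop_rule(T):
    """Stop once |m0 - m1| >= sqrt(8 log(T/t) / t), with t = 2n total samples."""
    def rule(n, m0, m1):
        t = 2 * n
        if t >= T:
            return True
        return abs(m0 - m1) >= math.sqrt(8 * math.log(T / t) / t)
    return rule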
Theorem 5. If TΔ² > 4e², the regret of the BAI-ETC algorithm is upper bounded as

    R^{BAI-ETC}_μ(T) ≤ (4 log(TΔ²/4))/Δ + (334 √(log(TΔ²/4)))/Δ + 178/Δ + 2Δ.

It is upper bounded by TΔ otherwise, and by 32√T + 2Δ in any case.
4 Fully Sequential Strategies for Known and Unknown Gaps

In the previous section we saw that allowing a random stopping time leads to a factor of 4 improvement in terms of the asymptotic regret relative to the naive fixed-design strategy. We now turn our attention to fully sequential strategies when Δ is known and unknown. The latter case is the classic 2-armed bandit problem and is now quite well understood. Our modest contribution in that case is the first algorithm that is simultaneously asymptotically optimal and order optimal in the minimax sense. For the former case, we are not aware of any previous research where the gap is known except the line of work by Bubeck et al. [2013], Bubeck and Liu [2013], where different questions are treated. In both cases we see that fully sequential strategies improve on the best ETC strategies by a factor of 2.
Known Gaps. We start by stating the lower bound (proved in Section A), which is a straightforward generalisation of Lai and Robbins' lower bound.

Theorem 6. Let π be a strategy that is uniformly efficient on H_Δ. Then for all μ ∈ H_Δ,

    lim inf_{T→∞} R^π_μ(T)/log T ≥ 1/(2Δ).
We are not aware of any existing algorithm matching this lower bound, which motivates us to introduce a new strategy called Δ-UCB that exploits the knowledge of Δ to improve the performance of UCB. In each round the algorithm chooses the arm that has been played most often so far unless the other arm has an upper confidence bound that is close to Δ larger than the empirical estimate of the most played arm. Like ETC strategies, Δ-UCB is not anytime in the sense that it requires the knowledge of both the horizon T and the gap Δ.

    1: input: T and Δ
    2: ε_T = (Δ/4) log^{−1/8}(e + TΔ²)
    3: for t ∈ {1, . . . , T} do
    4:     let A_{t,min} := arg min_{i∈{1,2}} N_i(t − 1) and A_{t,max} = 3 − A_{t,min}
    5:     if μ̂_{A_{t,min}}(t − 1) + √( 2 log(T/N_{A_{t,min}}(t − 1)) / N_{A_{t,min}}(t − 1) ) ≥ μ̂_{A_{t,max}}(t − 1) + Δ − 2ε_T then
    6:         choose A_t = A_{t,min}
    7:     else
    8:         choose A_t = A_{t,max}
    9:     end if
    10: end for
    Algorithm 4: Δ-UCB
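A direct transcription of Algorithm 4; the initial forced pull of each arm (needed because the confidence bonus is undefined when N_i = 0) and the tie-breaking in the arg min are implementation choices:

import math
import numpy as np

def delta_ucb(T, delta, mu, seed=0):
    """One run of Delta-UCB on two N(mu[a], 1) arms; returns the pull counts."""
    rng = np.random.default_rng(seed)
    eps = delta / 4 * math.log(math.e + T * delta**2) ** (-1 / 8)
    n, s = [0, 0], [0.0, 0.0]
    for t in range(T):
        if min(n) == 0:
            a = n.index(0)                      # play each arm once first
        else:
            lo = int(n[1] < n[0])               # least-played arm
            hi = 1 - lo
            bonus = math.sqrt(2 * math.log(T / n[lo]) / n[lo])
            a = lo if s[lo] / n[lo] + bonus >= s[hi] / n[hi] + delta - 2 * eps else hi
        s[a] += rng.normal(mu[a], 1.0)
        n[a] += 1
    return n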
Theorem 7. If T(2Δ − 3ε_T)² ≥ 2 and Tε_T² ≥ e², the regret of the Δ-UCB algorithm is upper bounded as

    R^{Δ-UCB}_μ(T) ≤ log(2TΔ²) / (2Δ(1 − 3ε_T/(2Δ))²) + √(π log(2TΔ²)) / (2Δ(1 − 3ε_T/Δ)²)
                     + Δ( 30e log(ε_T² T)/ε_T² + 80/ε_T² + 2/(2Δ − 3ε_T)² ) + 5Δ.

Moreover lim sup_{T→∞} R^{Δ-UCB}_μ(T)/log(T) ≤ (2Δ)^{−1} and, for all μ ∈ H_Δ, R^{Δ-UCB}_μ(T) ≤ 328√T + 5Δ.

The proof may be found in Appendix E.
Unknown Gaps. In the classical bandit setting where Δ is unknown, UCB by Katehakis and Robbins [1995] is known to be asymptotically optimal: R^{UCB}_μ(T) ∼ 2 log(T)/Δ, which matches the lower bound of Lai and Robbins [1985]. Non-asymptotic regret bounds are given for example by Auer et al. [2002], Cappé et al. [2013]. Unfortunately, UCB is not optimal in the minimax sense, which is so far only achieved by algorithms that are not asymptotically optimal [Audibert and Bubeck, 2009, Lattimore, 2015]. Here, with only two arms, we are able to show that Algorithm 5 below is simultaneously minimax order-optimal and asymptotically optimal. The strategy is essentially the same as suggested by Lai [1987], but with a fractionally smaller confidence bound. The proof of Theorem 8 is given in Appendix F. Empirically the smaller confidence bonus used by UCB* leads to a significant improvement relative to UCB.

    1: input: T
    2: for t ∈ {1, . . . , T} do
    3:     A_t = arg max_{i∈{1,2}} μ̂_i(t − 1) + √( (2/N_i(t − 1)) log(T/N_i(t − 1)) )
    4: end for
    Algorithm 5: UCB*
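Algorithm 5 in the same style (again, the initial forced pulls handle the N_i = 0 case; this is an implementation choice, not stated in the pseudocode):

import math
import numpy as np

def ucb_star(T, mu, seed=0):
    """One run of UCB* on two N(mu[a], 1) arms; returns the pull counts."""
    rng = np.random.default_rng(seed)
    n, s = [0, 0], [0.0, 0.0]
    for t in range(T):
        if min(n) == 0:
            a = n.index(0)
        else:
            def index(i):
                return s[i] / n[i] + math.sqrt(2 / n[i] * math.log(T / n[i]))
            a = max((0, 1), key=index)
        s[a] += rng.normal(mu[a], 1.0)
        n[a] += 1
    return n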
Theorem 8. For all ε ∈ (0, Δ), if T(Δ − ε)² ≥ 2 and Tε² ≥ e², the regret of the UCB* strategy is upper bounded as

    R^{UCB*}_μ(T) ≤ (2 log(TΔ²/2)) / (Δ(1 − ε/Δ)²) + (√(π log(TΔ²/2))) / (Δ(1 − ε/Δ)²) + 2 / (Δ(1 − ε/Δ)²)
                    + Δ(30e log(ε²T) + 16e)/ε² + Δ.

Moreover, lim sup_{T→∞} R^{UCB*}_μ(T)/log(T) = 2/Δ and, for all μ ∈ H, R^{UCB*}_μ(T) ≤ 33√T + Δ.

Note that if there are K > 2 arms, then the strategy above is still asymptotically optimal, but suffers a minimax regret of Θ(√(TK log(K))), which is a factor of √(log(K)) suboptimal.
5 Numerical Experiments
We represent here the regret of the five strategies presented in this article on a bandit problem with Δ = 1/5, for different values of the horizon. The regret is estimated by 4·10⁵ Monte-Carlo replications. In the legend, the estimated slopes of Δ R_μ(T) (in logarithmic scale) are indicated after the policy names.

[Figure: estimated regret of the five strategies as a function of the horizon T (from 50 to 50000, logarithmic scale). Estimated slopes: FB-ETC 3.65, BAI-ETC 2.98, UCB 1.59, SPRT-ETC 1.03, Δ-UCB 0.77.]
The experimental behavior of the algorithms reflects the theoretical results presented above: the
regret asymptotically grows as the logarithm of the horizon, the experimental coefficients correspond
approximately to theory, and the relative ordering of the policies is respected. However, it should
be noted that for short horizons the hierarchy is not quite the same, and the growth rate is not
logarithmic; this question is raised in Garivier et al. [2016]. In particular, on short horizons the
Best-Arm Identification procedure performs very well with respect to the others, and starts to be
beaten (even by the gap-aware strategies) only when TΔ² is much larger than 10.
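Up to Monte-Carlo noise, the experiment can be reproduced with a driver of the following shape, reusing the run_regret sketch given after Eq. (1); the horizons and replication count here are illustrative, not those of the figure:

def estimate_regrets(policies, delta=0.2, horizons=(50, 200, 1000, 5000), n_runs=400):
    """Estimate R(T) for each named policy on the two-armed problem mu = (delta, 0).
    `policies` maps a name to a function T -> policy, as expected by run_regret."""
    mu = (delta, 0.0)
    return {name: [run_regret(make_policy(T), mu, T, n_runs=n_runs) for T in horizons]
            for name, make_policy in policies.items()}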
6 Conclusion: Beyond Uniform Exploration, Two Arms and Gaussian Distributions
It is worth emphasising the impossibility of non-trivial lower bounds on the regret of ETC strategies
using any possible (non-uniform) sampling rule. Indeed, using UCB as a sampling rule together with
an a.s. infinite stopping rule defines an artificial but formally valid ETC strategy that achieves the best
possible rate for general strategies. This strategy is not a faithful counter-example to our claim that
ETC strategies are sub-optimal, because UCB is not a satisfying exploration rule. If exploration is the
objective, then uniform sampling is known to be optimal in the two-armed Gaussian case [Kaufmann
et al., 2014], which justifies the uniform sampling assumption.
The use of ETC strategies for regret minimisation (e.g., as presented by Perchet and Rigollet [2013])
is certainly not limited to bandit models with 2 arms. The extension to multiple arms is based on the
successive elimination idea in which a set of active arms is maintained with arms chosen according
to a round robin within the active set. Arms are eliminated from the active set once their optimality
becomes implausible and the exploration phase terminates when the active set contains only a single
arm (an example is by Auer and Ortner [2010]). The Successive Elimination algorithm has been
introduced by Even-Dar et al. [2006] for best-arm identification in the fixed-confidence setting. It was
shown to be rate-optimal, and thus a good compromise for both minimizing regret and finding the
best arm. If one looks more precisely at mutliplicative constants, however, Garivier and Kaufmann
[2016] showed that it is suboptimal for the best arm identification task in almost all settings except
two-armed Gaussian bandits. Regarding regret minimization, the present paper shows that it is
sub-optimal by a factor 2 on every two-armed Gaussian problem.
It is therefore interesting to investigate the performance in terms of regret of an ETC algorithm
using an optimal BAI algorithm. This is actually possible not only for Gaussian distributions, but
more generally for one-parameter exponential families, for which Garivier and Kaufmann [2016]
propose the asymptotically optimal Track-and-Stop strategy. Denoting d(?, ?0 ) = KL(?? , ??0 ) the
Kullback-Leibler divergence between two distributions parameterised by ? and ?0 , they provide
results which can be adapted to obtain the following bound.
Proposition 1. For ? such that ?1 > maxa6=1 ?a , the regret of the ETC strategy using Track-and-Stop
exploration with risk 1/T satisfies
!
K
X
R?TaS (T )
lim sup
? T ? (?)
wa? (?)(?1 ? ?a ) ,
log T
T ??
a=2
where T ? (?) (resp. w? (?)) is the the maximum (resp. maximiser) of the optimisation problem
w1 ?1 + wa ?a
wa ?1 + wa ?a
max inf w1 d ?1 ,
+ wa d ?a ,
,
w??K a6=1
w1 + wa
w1 + wa
where ?K is the set of probability distributions on {1, . . . , K}.
In general, it is not easy to quantify the difference to the lower bound of Lai and Robbins,

    lim inf_{T→∞} R^π_μ(T)/log T ≥ Σ_{a=2}^{K} (μ1 − μ_a)/d(μ_a, μ1).
Even for Gaussian distributions, there is no general closed-form formula for T*(μ) and w*(μ) except when K = 2. However, we conjecture that the worst case is when μ1 and μ2 are much larger than the other means: then, the regret is almost the same as in the 2-arm case, and ETC strategies are suboptimal by a factor 2. On the other hand, the most favourable case (in terms of relative efficiency) seems to be when μ2 = · · · = μK: then

    w1*(μ) = √(K − 1) / (K − 1 + √(K − 1)),   w2*(μ) = · · · = wK*(μ) = 1 / (K − 1 + √(K − 1)),

and T* = 2(√(K − 1) + 1)²/Δ², leading to

    lim sup_{T→∞} R^{TaS}_μ(T)/log(T) ≤ (1 + 1/√(K − 1)) · 2(K − 1)/Δ,

while Lai and Robbins' lower bound yields 2(K − 1)/Δ. Thus, the difference grows with K as 2√(K − 1) log(T)/Δ, but the relative difference decreases.
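The closed form above is easy to check numerically; a sketch for the equal-means case, assuming unit-variance Gaussian arms so that d(μ, λ) = (μ − λ)²/2 (the regret constant below simplifies exactly to (1 + 1/√(K−1)) · 2(K−1)/Δ, matching the lim sup above):

import math

def tas_constants(K, delta):
    """w* and T* for mu_2 = ... = mu_K = mu_1 - delta, unit-variance Gaussian arms."""
    r = math.sqrt(K - 1)
    w1 = r / (K - 1 + r)
    w_other = 1 / (K - 1 + r)
    t_star = 2 * (r + 1) ** 2 / delta**2
    regret_const = t_star * (K - 1) * w_other * delta  # = T*(mu) sum_a w*_a (mu_1 - mu_a)
    return w1, w_other, t_star, regret_const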
References
Jean-Yves Audibert and Sébastien Bubeck. Minimax policies for adversarial and stochastic bandits. In Proceedings of Conference on Learning Theory (COLT), pages 217–226, 2009.
Peter Auer and Ronald Ortner. UCB revisited: Improved regret bounds for the stochastic multi-armed bandit problem. Periodica Mathematica Hungarica, 61(1-2):55–65, 2010.
Peter Auer, Nicolò Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47:235–256, 2002.
Sébastien Bubeck and Che-Yu Liu. Prior-free and prior-dependent regret bounds for Thompson sampling. In Advances in Neural Information Processing Systems, pages 638–646, 2013.
Sébastien Bubeck, Vianney Perchet, and Philippe Rigollet. Bounded regret in stochastic multi-armed bandits. In Proceedings of the 26th Conference On Learning Theory, pages 122–134, 2013.
Olivier Cappé, Aurélien Garivier, Odalric-Ambrym Maillard, Rémi Munos, and Gilles Stoltz. Kullback–Leibler upper confidence bounds for optimal sequential allocation. The Annals of Statistics, 41(3):1516–1541, 2013.
Eyal Even-Dar, Shie Mannor, and Yishay Mansour. Action elimination and stopping conditions for the multi-armed bandit and reinforcement learning problems. Journal of Machine Learning Research, 7:1079–1105, 2006.
Aurélien Garivier and Emilie Kaufmann. Optimal best arm identification with fixed confidence. In Proceedings of the 29th Conference On Learning Theory (to appear), 2016.
Aurélien Garivier, Pierre Ménard, and Gilles Stoltz. Explore first, exploit next: The true shape of regret in bandit problems. arXiv preprint arXiv:1602.07182, 2016.
Abdolhossein Hoorfar and Mehdi Hassani. Inequalities on the Lambert W function and hyperpower function. J. Inequal. Pure and Appl. Math, 9(2):5–9, 2008.
Michael N Katehakis and Herbert Robbins. Sequential choice from several populations. Proceedings of the National Academy of Sciences of the United States of America, 92(19):8584, 1995.
Emilie Kaufmann, Olivier Cappé, and Aurélien Garivier. On the complexity of A/B testing. In Proceedings of the 27th Conference On Learning Theory, 2014.
Tze Leung Lai. Adaptive treatment allocation and the multi-armed bandit problem. The Annals of Statistics, pages 1091–1114, 1987.
Tze Leung Lai and Herbert Robbins. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6(1):4–22, 1985.
Tze Leung Lai, Herbert Robbins, and David Siegmund. Sequential design of comparative clinical trials. In M. Haseeb Rizvi, Jagdish Rustagi, and David Siegmund, editors, Recent advances in statistics: papers in honor of Herman Chernoff on his sixtieth birthday, pages 51–68. Academic Press, 1983.
Tor Lattimore. Optimally confident UCB: Improved regret for finite-armed bandits. Technical report, 2015. URL http://arxiv.org/abs/1507.07880.
Peter Mörters and Yuval Peres. Brownian motion, volume 30. Cambridge University Press, 2010.
Vianney Perchet and Philippe Rigollet. The multi-armed bandit with covariates. The Annals of Statistics, 2013.
Vianney Perchet, Philippe Rigollet, Sylvain Chassang, and Erik Snowberg. Batched bandit problems. In Proceedings of the 28th Conference On Learning Theory, 2015.
David Siegmund. Sequential Analysis. Springer-Verlag, 1985.
William Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25(3/4):285–294, 1933.
Abraham Wald. Sequential tests of statistical hypotheses. Annals of Mathematical Statistics, 16(2):117–186, 1945.
Interposing an ontogenic model between
Genetic Algorithms and Neural Networks
Richard K. Belew
rik@cs.ucsd.edu
Cognitive Computer Science Research Group
Computer Science & Engr. Dept. (0014)
University of California - San Diego
La Jolla, CA 92093
Abstract
The relationships between learning, development and evolution in
Nature are taken seriously, to suggest a model of the developmental
process whereby the genotypes manipulated by the Genetic Algorithm (GA) might be expressed to form phenotypic neural networks
(NNet) that then go on to learn. ONTOL is a grammar for generating polynomial NNets for time-series prediction. Genomes correspond to an ordered sequence of ONTOL productions and define a
grammar that is expressed to generate a NNet. The NNet's weights
are then modified by learning, and the individual's prediction error
is used to determine GA fitness. A new gene doubling operator
appears critical to the formation of new genetic alternatives in the
preliminary but encouraging results presented.
1 Introduction
Two natural phenomena, the learning done by individuals' nervous systems and the
evolution done by populations of individuals, have served as the basis of distinct
classes of adaptive algorithms, neural networks (NNets) and Genetic Algorithms
(GAs), resp. Interactions between learning and evolution in Nature suggests that
combining NNet and GA algorithmic techniques might also yield interesting hybrid
algorithms.
[Figure 1: Polynomial networks. The space of polynomial networks is two-dimensional, parameterized by Dimension (the history X_{t-1}, ..., X_{t-H}) and Degree; the example network computes x̂_t = w4·X_{t-2} + w3·X_{t-2}·X_{t-1} + ....]
Taking the analogy to learning and evolution seriously, we propose that the missing
feature is the developmental process whereby the genotypes manipulated by the
GA are expressed to form phenotypic NNets that then go on to learn. Previous
attempts to use the GA to search for good NNet topologies have foundered exactly
because they have assumed an overly direct genotype-to-phenotype correspondence.
This research is therefore consistent with other NNet research into the physiology of
neural development [3], as well as with "constructive" methods for changing
network topologies adaptively during the training process [4]. Additional motivation
derives from the growing body of neuroscience demonstrating the importance of
developmental processes as the shapers of effective learning networks. Cognitively,
the resolution of false dichotomies like "nature/nurture" and "nativist/empiricist"
also depends on a richer language for describing the way genetically determined
characteristics and within-lifetime changes by individuals can interact.
Because GAs and NNets are each complicated technologies in their own right, and
because the focus of the current research is a model of development that can span
between them, three major simplifications have been imposed for the preliminary
research reported here. First, in order to stay close to the mathematical theory of
functional approximation, we restrict the form of our NNets to what can be called
"polynomials networks" (cf. [7]). That is, we will consider networks with a first
layer of linear units (i.e., terms in the polynomial that are simply weighted input
Xi), a second layer with units that form products of first-layer units, a third layer
with units that form products of second-layer units, etc.; see Figure 1, and below
for an example. As depicted in Figure 1 the space of polynomial networks can be
viewed as two-dimensional, parameterized by dimension (i.e., how much history of
the time series is used) and degree.
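As a concrete illustration of this structure, the sketch below evaluates a small polynomial network on a time-series history. The term/weight encoding is invented for illustration; the example terms mirror the network drawn in Figure 1:

```python
import numpy as np

def poly_net_predict(history, terms, weights):
    """Evaluate a polynomial network: each term multiplies the history values
    at its lags (the product units); the prediction is their weighted sum."""
    # history[0] is x_{t-1}, history[1] is x_{t-2}, and so on.
    return sum(w * np.prod([history[lag] for lag in lags])
               for lags, w in zip(terms, weights))

# Example: x_hat_t = w4 * x_{t-2} + w3 * x_{t-2} * x_{t-1}, as in Figure 1.
terms = [(1,), (1, 0)]    # lag tuples for each term
weights = [0.5, -0.3]     # illustrative weight values
print(poly_net_predict([0.8, 0.6], terms, weights))
```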
There remains the problem of finding the best parameter values for this particular
polynomial form. Much of classical optimization theory and more recent NNet
research is concerned with various methods for performing this task. Previous research
has demonstrated that the global sampling behavior of the GA works very
effectively with any gradient, local search technique [2]. The second major simplification, then, is that for the time being we use only the most simple-minded
gradient method: first-order, fixed-step gradient descent. Analytically, this is the
most tractable, and the general algorithm design can readily replace this with any
other local search technique.
The final simplification is that we focus on one of the most parsimonious of problems, time-series prediction: the GA is used to evolve NNets that are good at
predicting X_{t+1} given access to an unbounded history X_t, X_{t-1}, X_{t-2}, .... Polynomial approximations of an arbitrary time series can vary in two dimensions: the
extent to which they rely on this history (e.g., how far back in time they reach), and
their degree. The Stone-Weierstrass Approximation Theorem guarantees that,
within this two-dimensional space, there exists some polynomial that will match
the desired temporal sequence to arbitrary precision. The problem, of course, is
that over a history of length H, allowing terms of degree up to m, there exist O(H^m) terms, far
too many to search effectively. From the perspective of function approximation,
then, this work corresponds to a particular heuristic for searching for the correct
polynomial form, the parameters of which will be tuned with a gradient technique.
2 Expression of the ONTOL grammar
Every multi-cellular organism has the problem of using a single genetic description
contained in the first germ cell as specification for all of its various cell types. The
genome therefore appears to contain a set of developmental instructions, subsets of
which become "relevant" to the particular context in which each developing cell finds
itself. If we imagine that each cell type is a unique symbol in some alphabet, and
that the mature organism is a string of symbols, it becomes very natural to model
the developmental process as a (context-sensitive) grammar generating this string
[6, 5]. The initial germ cell becomes the start symbol. A series of production rules
specify the expansion (mitosis) of this non-terminal (cell) into two other symbols
that then develop according to the same set of genetically-determined rules, until
all cells are in a mature, terminal state.
ONTOL is a grammar for generating cells in the two-dimensional space of polynomial networks. The left hand side (LHS) of productions in this grammar define
conditions on the cells' internal Clock state and on the state of its eight Moore
neighbors. The RHS of the production defines one of five cell-state update actions
that are performed if the LHS condition is satisfied: A cell can mitosize either left or
down (M Left, M Down), meaning that this adjacent cell now becomes filled with
an identical copy; Die (i.e., disappear entirely); Tick (simply decrement its internal Clock state); or Terminate (cease development). Only terminating cells form
synaptic connections, and only to adjacent neighbors.
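The sketch below gives one way such an expression step could look. The grid representation, rule encoding, and matching order are simplifications invented for illustration; the five update actions follow the description above:

```python
MAX_CLOCK = 4

def express(genome, steps=100):
    """Expand a single gamete cell on the 2-d (degree, dimension) grid by
    repeatedly firing the first gene whose LHS condition matches each cell."""
    cells = {(0, 0): MAX_CLOCK}          # position -> Clock state
    terminated = set()
    for _ in range(steps):
        active = [p for p in list(cells) if p not in terminated]
        if not active:
            break
        for pos in active:
            for condition, action in genome:
                if condition(pos, cells):   # LHS tests Clock + Moore neighbors
                    fire(action, pos, cells, terminated)
                    break
    return cells, terminated

def fire(action, pos, cells, terminated):
    r, c = pos
    cells[pos] -= 1                      # Clock decrements whenever a gene fires
    if action == "M_LEFT":               # mitose: fill the left neighbor with a copy
        cells[(r, c - 1)] = cells[pos]
    elif action == "M_DOWN":             # mitose: fill the cell below with a copy
        cells[(r + 1, c)] = cells[pos]
    elif action == "DIE":                # disappear entirely
        del cells[pos]
    elif action == "TERMINATE":          # mature; only such cells form synapses
        terminated.add(pos)
    # "TICK" needs no extra work: the decrement above is the whole action
```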
The developmental process is begun by placing a single "gamete" cell at the origin of
the 2d polyspace, with its Clock state initialized to a maximal value MaxClock = 4;
this state is decremented every time a gene is fired. If and when a gene causes this
cell to undergo mitosis, a new cell, either to the left or below the originial cell, is
created. Critically, the same set of genetic instructions contained in the original
gametic cell are used to control transitions of all its progeny cells (much like a
cellular automaton's transition table), even though the differing contexts of each
cell are likely to cause different genes to be applied in different cells. Figure 2 shows
a trace of this developmental process: each snap-shot shows the Clock states of all
active (non-terminated) cells, the coordinates of the cell being expressed, and the
gene used to control its expression.

[Figure 2: Logistic genome, engineered. Snapshots of the developmental expression trace, showing the active cells' Clock states at successive Degree/Dimension stages and the gene fired at each step.]
3 Experimental design
Each generation begins by developing and evaluating each genotype in the population. First, each genome in the population is expressed to form an executable
Lisp lambda expression computing a polynomial and a corresponding set of initial
weights for each of its terms. If this expression can be performed successfully and
the individual is viable (i.e., its genome can be interpreted to build a well-formed
network), the individual is exposed to NTrain sequential instances of the time
series. Fitness is then defined to be its cumulative error on the next NTest time
steps.
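A sketch of this evaluation loop, under simplifying assumptions: a squared-error gradient on each training point with the fixed-step, first-order descent described earlier, and absolute error as the cumulative test error. The helper `express_to_polynomial`, standing in for ONTOL expression and weight extraction, is hypothetical:

```python
import numpy as np

def evaluate_fitness(genome, series, n_train, n_test, lr=0.01):
    """Express a genome, tune its weights on n_train points, and return
    cumulative prediction error on the next n_test points (the fitness)."""
    net = express_to_polynomial(genome)    # hypothetical: genome -> (terms, weights)
    if net is None:                        # genome was not viable
        return float("inf")
    terms, w = net                         # w: numpy array of term weights
    h = 1 + max(max(lags) for lags in terms)   # history length required
    series = np.asarray(series, dtype=float)

    def features(t):                       # [x_{t-1}, x_{t-2}, ...] -> term products
        hist = series[t - 1::-1][:h]
        return np.array([np.prod(hist[list(lags)]) for lags in terms])

    for t in range(h, h + n_train):        # learning: fixed-step gradient descent
        f = features(t)
        w -= lr * (f @ w - series[t]) * f
    return sum(abs(features(t) @ w - series[t])   # fitness: cumulative test error
               for t in range(h + n_train, h + n_train + n_test))
```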
After the entire population has been evaluated, the next generation is formed according to a relatively conventional genetic algorithm: more successful individuals
are differentially reproduced and genetic operators are applied to these to experiment with novel, but similar, alternatives. Each genome is cloned zero, one or more
times using a proportional selection algorithm that guarantees the expected number
of offspring is proportional to an individual's relative fitness.
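A sketch of this cloning step, assuming fitness is taken as inverse cumulative prediction error (the exact error-to-fitness transform is not spelled out in the text):

```python
import numpy as np

def proportional_selection(population, errors, rng=np.random.default_rng(0)):
    """Clone individuals so that expected offspring counts are proportional
    to relative fitness, here taken as inverse prediction error."""
    fitness = 1.0 / (np.asarray(errors, dtype=float) + 1e-9)
    probs = fitness / fitness.sum()
    picks = rng.choice(len(population), size=len(population), p=probs)
    return [population[i] for i in picks]
```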
Variation is introduced into the population by mutation and recombination genetic
operators that explore new genes and genomic combinations. Four types of mutation
were applied, with the probability of a mutation proportional to genome length.
First, some random portion of an extant gene might be randomly altered, e.g.,
changing an initial weight, adding or deleting a constraint on a condition, changing
the gene's action. Because a gene's order in the genome can affect its probability
of being expressed, a second form of mutation permutes the order of the genes on
the genome.

[Figure 3: Population minimum and average fitness (prediction error) over the first 800 generations.]

A third class of mutation removes genes from the genome, always
"trimming" them from the end. Combined with the expression mechanism's bias
towards the head of the genomic list, this trimming operation creates a pressure
towards putting genes critical to early ontogeny near the head. The final and
critical form of mutation randomly selects a gene to be doubled: a duplicate copy of
the gene is constructed and inserted at a randomly selected position in the genome.
After all mutations have been performed, cross-over is performed between pairs of
individuals.
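Gathering the four mutation types into one sketch (the rate constant and the gene-perturbation helper `perturb` are illustrative assumptions; genes are treated as opaque objects):

```python
import copy
import random

def mutate(genome, rate=0.002, rng=random):
    """Apply the four mutation operators; each fires with probability
    proportional to genome length, as described above."""
    genome = list(genome)
    p = rate * len(genome)
    if rng.random() < p:                         # 1) alter part of an extant gene
        i = rng.randrange(len(genome))
        genome[i] = perturb(genome[i])           # hypothetical helper
    if rng.random() < p:                         # 2) permute gene order
        i, j = rng.randrange(len(genome)), rng.randrange(len(genome))
        genome[i], genome[j] = genome[j], genome[i]
    if rng.random() < p and genome:              # 3) trim a gene from the end
        genome.pop()
    if rng.random() < p and genome:              # 4) double a randomly chosen gene
        g = copy.deepcopy(rng.choice(genome))
        genome.insert(rng.randrange(len(genome) + 1), g)
    return genome
```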
4 Experiments
To demonstrate, consider the problem of predicting a particularly difficult time
series, the chaotic logistic map: X_t = 4.0·X_{t-1} - 4.0·X_{t-1}². The example of Figure 2
showed an ONTOL genome engineered to produce the desired logistic polynomial.
This "genetically engineered" solution is merely evidence that a genetic solution
exists that can be interpreted to form the desired phenotypic form; the real test is
of course whether the GA can find it or something similar.
Early generations are not encouraging. Figure 3 shows the minimum (i.e., best)
prediction error and population average error for the first 800 generations of a typical
simulation. Initial progress is rapid because in the initial, randomly constructed
population, fully half of the individuals are not even viable. These are strongly
selected against, of course, and within the first two or three generations at least
95% of all individuals remain viable.

[Figure 4: Complex polynomials. Number of nonlinear polynomials in the population over 800 generations.]
For the next several hundred generations, however, all of ONTOL's developmental
machinery appears for naught as the dominant phenotypic individuals are the most
"simplistic" linear, first-degree approximators of the form W1Xl + woo Even here,
however, the GA is able to work in conjunction with the gradient learning process
is able to achieve Baldwin-like effects optimizing Wo and WI [1]. The simulation
reaches a "simplistic plateau," then, as it converges on a population composed of
the best predictors the simplistic linear, first-degree network topology permits for
this time series.
In the background, however, genetic operators are continuing to explore a wide
variety of genotypic forms that all have the property of generating roughly the
same simplistic phenotypes. Figure 4 shows that there are significant numbers of
"complex" polynomials l in early generations, and some of these have much higher
than average fitness 2 On average, however, genes leading to complex phenotypes
provide lead to poorer approximations than the simplistic ones, and are quickly
culled.
[1] I.e., either nonlinear terms or higher-dimensional dependence on the past.
[2] Note the good solutions in the first 50 generations, as well as subsequent dips during
the simplistic plateau.
[Figure 5: Genome length. Total gene count over 800 generations under selection for better predictors ("Selected") and under neutral selection ("Neutral").]
A critical aspect of the redundancy introduced by gene doubling is that old genetic
material is freed to mutate into new forms without threatening the phenotype's
viability. When compared to a population of mediocre, simplistic networks any
complex networks able to provide more accurate predictions have much higher fitness, and eventually are able to take over the population. Around generation 400,
then, Figure 3 shows the fitness dropping from the simplistic plateau, and Figure
4 shows the number of complex polynomials increasing. Many of these individuals'
genomes indeed encode grammars that form polynomials of the desired functional
form.
A surprising feature of these simulations is that while the genes leading to complex
phenotypes are present from the beginning and continue to be explored during
the simplistic plateau, it takes many generations before these genes are successfully
composed into robust, consistently viable genotypes. How do the complex genotypes
discovered in later generations differ from those in the initial population?
One piece of the answer is revealed in Figure 5: later genomes are much longer.
All 100 individuals in the initial population have exactly five genes, and so the initial "gene pool" size is 500. In the experiments just described, this number grows
asymptotically to approximately 6000 total genes (i.e., 60 per individual, on average) during the simplistic plateau, and then explodes a second time to more than
10,000 as the population converts to complex polynomials. It appears that gene
duplication creates a very constructive form of redundancy: multiple copies of
critical genes help the genotype maintain the more elaborate development programs
required to form complex phenotypes. Micro-analysis of the most successful individuals in later generations supports this view. While many parts of their genomes
appear inconsequential (for example, relative to the engineered genome of Figure 2),
both the M Down gene and the two-element Terminate genes, critical to forming
polynomials that are "morphologically isomorphic" with the correct solution, are
consistently present.
This hypothesis is also supported by results from a second experiment, also plotted
on Figure 5. Recall that the increase in genome size caused by gene doubling is offset
by a trimming mutation that periodically shortens a genome. The curve labelled
"Neutral" shows the results of these opposing operations when the next generation
is formed randomly, rather than being selected for better prediction. Under neutral
selection, genome size grows slightly from initial size, but gene doubling and genome
trimming then quickly reach equilibrium. When we select for better predictors,
however, longer genomes are clearly preferred, at least up to a point. The apparent
asymptote accompanying the simplistic plateau suggests that if these simulations
were extended, the length of complex genotypes would also stabilize.
Acknowledgements
I gratefully acknowledge the warm and stimulating research environments provided
by Domenico Parisi and colleagues at the Psychological Institute, CNR, Rome, Italy,
and Jean-Arcady Meyer and colleagues in the Groupe de BioInformatique, Ecole
Normale Superieure in Paris, France.
References
[1] R. K. Belew. Evolution, learning and culture: computational metaphors for
adaptive search. Complex Systems, 4(1):11-49, 1990.
[2] R. K. Belew, J. McInerney, and N. N. Schraudolph. Evolving networks: Using
the Genetic Algorithm with connectionist learning. In Proc. Second Artificial
Life Conference, pages 511-547, New York, 1991. Addison-Wesley.
[3] J. D. Cowan and A. E. Friedman. Development and regeneration of eye-brain
maps: A computational model. In Advances in Neural Info. Proc. Systems 2,
pages 92-99. Morgan Kaufman, 1990.
[4] S. E. Fahlman and C. Lebiere. The Cascade-Correlation learning architecture.
In D. S. Touretzky, editor, Advances in Neural Info. Proc. Systems 2, pages
524-532. Morgan Kaufmann, 1990.
[5] H. Kitano. Designing neural networks using genetic algorithms with graph generation system. Complex Systems, 4(4), 1990.
[6] A. Lindenmayer and G. Rozenberg. Automata, languages, development. North-Holland, Amsterdam, 1976.
[7] T. D. Sanger, R. S. Sutton, and C. J. Matheus. Iterative construction of sparse
polynomials. In J. E. Moody, S. J. Hanson, and R. P. Lippman, editors, Advances
in Neural Info. Proc. Systems 4, pages 1064-1071. Morgan Kaufmann, 1992.
5,725 | 6,180 | Learning Kernels with Random Features
Aman Sinha^1    John Duchi^1,2
^1 Departments of Electrical Engineering and ^2 Statistics
Stanford University
{amans,jduchi}@stanford.edu
Abstract
Randomized features provide a computationally efficient way to approximate kernel
machines in machine learning tasks. However, such methods require a user-defined
kernel as input. We extend the randomized-feature approach to the task of learning
a kernel (via its associated random features). Specifically, we present an efficient
optimization problem that learns a kernel in a supervised manner. We prove the
consistency of the estimated kernel as well as generalization bounds for the class
of estimators induced by the optimized kernel, and we experimentally evaluate our
technique on several datasets. Our approach is efficient and highly scalable, and we
attain competitive results with a fraction of the training cost of other techniques.
1 Introduction
An essential element of supervised learning systems is the representation of input data. Kernel
methods [27] provide one approach to this problem: they implicitly transform the data to a new
feature space, allowing non-linear data representations. This representation comes with a cost, as
kernelized learning algorithms require time that grows at least quadratically in the data set size,
and predictions with a kernelized procedure require the entire training set. This motivated Rahimi
and Recht [24, 25] to develop randomized methods that efficiently approximate kernel evaluations
with explicit feature transformations; this approach gives substantial computational benefits for large
training sets and allows the use of simple linear models in the randomly constructed feature space.
Whether we use standard kernel methods or randomized approaches, using the "right" kernel for a
problem can make the difference between learning a useful or useless model. Standard kernel methods
as well as the aforementioned randomized-feature techniques assume the input of a user-defined
kernel, a weakness if we do not a priori know a good data representation. To address this weakness,
one often wishes to learn a good kernel, which requires substantial computation. We combine kernel
learning with randomization, exploiting the computational advantages offered by randomized features
to learn the kernel in a supervised manner. Specifically, we use a simple pre-processing stage for
selecting our random features rather than jointly optimizing over the kernel and model parameters.
Our workflow is straightforward: we create randomized features, solve a simple optimization problem
to select a subset, then train a model with the optimized features. The procedure results in lowerdimensional models than the original random-feature approach for the same performance. We give
empirical evidence supporting these claims and provide theoretical guarantees that our procedure is
consistent with respect to the limits of infinite training data and infinite-dimensional random features.
1.1 Related work
To discuss related work, we first describe the supervised learning problem underlying our approach.
We have a cost c : R × Y → R, where c(·, y) is convex for y ∈ Y, and a reproducing kernel Hilbert
space (RKHS) of functions F with kernel K. Given a sample {(x^i, y^i)}_{i=1}^n, the usual ℓ2-regularized
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
learning problem is to solve the following (shown in primal and dual forms respectively):

$$\underset{f\in\mathcal{F}}{\text{minimize}}\ \sum_{i=1}^n c(f(x^i), y^i) + \frac{\lambda}{2}\|f\|_2^2, \quad\text{or}\quad \underset{\alpha\in\mathbb{R}^n}{\text{maximize}}\ -\sum_{i=1}^n c^*(\alpha_i, y^i) - \frac{1}{2\lambda}\alpha^T G\alpha, \qquad (1)$$

where ‖·‖₂ denotes the Hilbert space norm, c*(α, y) = sup_z {αz − c(z, y)} is the convex conjugate
of c (for fixed y), and G = [K(x^i, x^j)]_{i,j=1}^n denotes the Gram matrix.
Several researchers have studied kernel learning. As noted by Gönen and Alpaydın [14], most
formulations fall into one of a few categories. In the supervised setting, one assumes a base class
or classes of kernels and either uses heuristic rules to combine kernels [2, 23], optimizes structured
(e.g. linear, nonnegative, convex) compositions of the kernels with respect to an alignment metric
[9, 16, 20, 28], or jointly optimizes kernel compositions with empirical risk [17, 20, 29]. The latter
approaches require an eigendecomposition of the Gram matrix or costly optimization problems
(e.g. quadratic or semidefinite programs) [10, 14], but these models have a variety of generalization
guarantees [1, 8, 10, 18, 19]. Bayesian variants of compositional kernel search also exist [12, 13]. In
un- and semi-supervised settings, the goal is to learn an embedding of the input distribution followed
by a simple classifier in the embedded space (e.g. [15]); the hope is that the input distribution carries
the structure relevant to the task. Despite the current popularity of these techniques, especially deep
neural architectures, they are costly, and it is difficult to provide guarantees on their performance.
Our approach optimizes kernel compositions with respect to an alignment metric, but rather than work
with Gram matrices in the original data representation, we work with randomized feature maps that
approximate RKHS embeddings. We learn a kernel that is structurally different from a user-supplied
base kernel, and our method is an efficiently (near linear-time) solvable convex program.
2 Proposed approach
At a high level, we take a feature mapping, find a distribution that aligns this mapping with the labels
y, and draw random features from the learned distribution; we then use these features in a standard
supervised learning approach.
For simplicity, we focus on binary classification: we have n datapoints (x^i, y^i) ∈ R^d × {−1, 1}.
Letting φ : R^d × W → [−1, 1] and Q be a probability measure on a space W, define the kernel

$$K_Q(x, x') := \int \phi(x, w)\,\phi(x', w)\, dQ(w). \qquad (2)$$
We want to find the "best" kernel K_Q over all distributions Q in some (large, nonparametric) set P of
possible distributions on random features; we consider a kernel alignment problem of the form

$$\underset{Q\in\mathcal{P}}{\text{maximize}}\ \sum_{i,j} K_Q(x^i, x^j)\, y^i y^j. \qquad (3)$$
We focus on sets P defined by divergence measures on the space of probability distributions.
For a convex function f with f(1) = 0, the f-divergence between distributions P and Q is
D_f(P‖Q) = ∫ f(dP/dQ) dQ. Then, for a base (user-defined) distribution P₀, we consider collections P := {Q : D_f(Q‖P₀) ≤ ρ}, where ρ > 0 is a specified constant. In this paper, we focus
on divergences f(t) = t^k − 1 for k ≥ 2. Intuitively, the distribution Q maximizing the alignment (3) gives a feature space in which pairwise distances are similar to those in the output space Y.
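As a point of reference, the case k = 2 (the choice used in the experiments of Section 4) recovers the χ²-divergence; this is a standard identity:

$$D_f(Q\|P_0) = \int \Big(\frac{dQ}{dP_0}\Big)^2 dP_0 - 1 \quad\text{for } f(t) = t^2 - 1.$$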
Unfortunately, the problem (3) is generally intractable as it is infinite dimensional.
Using the randomized feature approach, we approximate the integral (2) as a discrete sum over
samples W^i drawn i.i.d. from P₀, i ∈ [N_w]. Defining the discrete approximation P_{N_w} := {q : D_f(q‖1/N_w) ≤ ρ}
to P, we have the following empirical version of problem (3):

$$\underset{q\in\mathcal{P}_{N_w}}{\text{maximize}}\ \sum_{i,j} y^i y^j \sum_{m=1}^{N_w} q_m\,\phi(x^i, w^m)\,\phi(x^j, w^m). \qquad (4)$$

Using randomized features, matching the input and output distances in problem (4) translates to
finding a (weighted) set of points among w^1, w^2, ..., w^{N_w} that best "describe" the underlying dataset,
or, more directly, finding weights q so that the kernel matrix matches the correlation matrix yyᵀ.
Given a solution q̂ to problem (4), we can solve the primal form of problem (1) in two ways. First, we
can apply the Rahimi and Recht [24] approach by drawing D samples W^1, ..., W^D i.i.d. from q̂, defining
features φ^i = [φ(x^i, W^1) ⋯ φ(x^i, W^D)]ᵀ, and solving the risk minimization problem

$$\hat\theta = \underset{\theta}{\text{argmin}}\ \Big\{ \sum_{i=1}^n c\big(\tfrac{1}{\sqrt{D}}\,\theta^T\phi^i,\ y^i\big) + r(\theta) \Big\} \qquad (5)$$

for some regularization r. Alternatively, we may set φ^i = [φ(x^i, w^1) ⋯ φ(x^i, w^{N_w})]ᵀ, where
w^1, ..., w^{N_w} are the original random samples from P₀ used to solve (4), and directly solve

$$\hat\theta = \underset{\theta}{\text{argmin}}\ \Big\{ \sum_{i=1}^n c\big(\theta^T \mathrm{diag}(\hat q)^{1/2}\,\phi^i,\ y^i\big) + r(\theta) \Big\}. \qquad (6)$$

Notably, if q̂ is sparse, the problem (6) need only store the random features corresponding to non-zero
entries of q̂. Contrast our two-phase procedure to that of Rahimi and Recht [25], which samples
W^1, ..., W^D i.i.d. from P₀ and solves the minimization problem

$$\underset{\alpha\in\mathbb{R}^{N_w}}{\text{minimize}}\ \sum_{i=1}^n c\Big(\sum_{m=1}^{D} \alpha_m\,\phi(x^i, w^m),\ y^i\Big) \quad \text{subject to}\quad \|\alpha\|_\infty \le C/N_w, \qquad (7)$$

where C is a numerical constant. At first glance, it appears that we may suffer both in terms of
computational efficiency and in classification or learning performance compared to the one-step
procedure (7). However, as we show in the sequel, the alignment problem (4) can be solved very
efficiently and often yields sparse vectors q̂, thus substantially decreasing the dimensionality of
problem (6). Additionally, we give experimental evidence in Section 4 that the two-phase procedure
yields generalization performance similar to standard kernel and randomized feature methods.
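To make this workflow concrete, here is a minimal sketch of the two-phase procedure with estimator (5). All names are illustrative rather than the authors' code: `feature_fn` and `sample_w` stand in for the user's random-feature map and base measure P₀, `solve_alignment` is the kernel optimization of Algorithm 1 (a sketch of which follows that algorithm in Section 2.1), and the logistic model is one choice of convex loss, not prescribed by the method:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def two_phase_fit(X, y, feature_fn, sample_w, n_w=2000, D=500, rho=200.0):
    """Phase 1: optimize q over n_w random features (problem (4)).
    Phase 2: draw D features from q-hat and fit a linear model (estimator (5))."""
    rng = np.random.default_rng(0)
    W = sample_w(n_w, rng)                   # W^1..W^{n_w} drawn from P_0
    Phi = feature_fn(X, W).T                 # shape (n_w, n); rows are features
    q_hat = solve_alignment(Phi, y, rho)     # weights over the n_w features
    idx = rng.choice(n_w, size=D, p=q_hat)   # resample feature indices from q-hat
    Z = feature_fn(X, W[idx]) / np.sqrt(D)   # scaled feature matrix, as in (5)
    clf = LogisticRegression().fit(Z, y)
    return clf, W[idx]
```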
2.1 Efficiently solving problem (4)
The optimization problem (4) has structure that enables efficient (near linear-time) solutions. Define
the matrix Φ = [φ¹ ⋯ φⁿ] ∈ R^{N_w × n}, where φ^i = [φ(x^i, w^1) ⋯ φ(x^i, w^{N_w})]ᵀ ∈ R^{N_w} is the
randomized feature representation for x^i and the w^m are drawn i.i.d. from P₀. We can rewrite the optimization objective as

$$\sum_{i,j} y^i y^j \sum_{m=1}^{N_w} q_m\,\phi(x^i, w^m)\,\phi(x^j, w^m) = \sum_{m=1}^{N_w} q_m \Big(\sum_{i=1}^n y^i \phi(x^i, w^m)\Big)^2 = q^T\big((\Phi y)\odot(\Phi y)\big),$$

where ⊙ denotes the Hadamard product. Constructing the linear objective requires the evaluation of
Φy. Assuming that the computation of φ is O(d), construction of Φ is O(nN_w d) on a single processor.
However, this construction is trivially parallelizable. Furthermore, computation can be sped up even
further for certain distributions P₀. For example, the Fastfood technique can approximate Φ in
O(nN_w log(d)) time for the Gaussian kernel [21].
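As a sketch of this construction, assuming the random Fourier features of [24] for a Gaussian kernel (the same cos(wᵀx + b) form that appears in Section 4.1; the bandwidth parameter is an illustrative choice):

```python
import numpy as np

def build_objective(X, y, n_w, bandwidth=1.0, rng=np.random.default_rng(0)):
    """Build Phi in R^{n_w x n} with phi(x, (w, b)) = cos(w.x + b), then the
    linear objective vector v = (Phi y) elementwise-squared, as in Section 2.1."""
    n, d = X.shape
    W = rng.normal(scale=1.0 / bandwidth, size=(n_w, d))   # frequencies from N(0, I/bw^2)
    b = rng.uniform(0.0, 2 * np.pi, size=n_w)              # random phase offsets
    Phi = np.cos(W @ X.T + b[:, None])                     # one matrix multiply: O(n n_w d)
    v = (Phi @ y) ** 2                                     # Hadamard form (Phi y) o (Phi y)
    return Phi, v
```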
The problem (4) is also efficiently solvable via bisection over a scalar dual variable. Using λ ≥ 0 for
the constraint D_f(Q‖P₀) ≤ ρ, a partial Lagrangian is

$$L(q, \lambda) = q^T\big((\Phi y)\odot(\Phi y)\big) - \lambda\big(D_f(q\|1/N_w) - \rho\big).$$

The corresponding dual function is g(λ) = sup_{q∈Δ} L(q, λ), where Δ := {q ∈ R_+^{N_w} : qᵀ1 = 1}
is the probability simplex. Minimizing g(λ) yields the solution to problem (4); this is a convex
optimization problem in one dimension, so we can use bisection. The computationally expensive step
in each iteration is maximizing L(q, λ) with respect to q for a given λ. For f(t) = t^k − 1, we define
v := (Φy) ⊙ (Φy) and solve

$$\underset{q\in\Delta}{\text{maximize}}\ q^T v - \frac{\lambda}{N_w}\sum_{m=1}^{N_w} (N_w q_m)^k. \qquad (8)$$

This has a solution of the form q_m = [v_m/(λN_w^{k−1}) + τ]_+^{1/(k−1)}, where τ is chosen so that Σ_m q_m = 1.
We can find such a τ by a variant of median-based search in O(N_w) time [11]. Thus, for any k ≥ 2,
an ε-suboptimal solution to problem (4) can be found in O(N_w log(1/ε)) time (see Algorithm 1).
Algorithm 1 Kernel optimization with f(t) = t^k − 1 as divergence
Input: distribution P₀ on W, sample {(x^i, y^i)}_{i=1}^n, N_w ∈ N, feature function φ, ε > 0
Output: q ∈ R^{N_w} that is an ε-suboptimal solution to (4).
Setup: Draw N_w samples w^m i.i.d. from P₀, build feature matrix Φ, compute v := (Φy) ⊙ (Φy).
Set λ_u ← ∞, λ_l ← 0, λ_s ← 1
while λ_u = ∞:
    q ← argmax_{q∈Δ} L(q, λ_s)        // (solution to problem (8))
    if D_f(q‖1/N_w) < ρ then λ_u ← λ_s else λ_s ← 2λ_s
while λ_u − λ_l > ελ_s:
    λ ← (λ_u + λ_l)/2
    q ← argmax_{q∈Δ} L(q, λ)          // (solution to problem (8))
    if D_f(q‖1/N_w) < ρ then λ_u ← λ else λ_l ← λ
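A runnable sketch of Algorithm 1 for the k = 2 case follows. The inner maximization uses the closed form for (8), here q_m = [v_m/(2λN_w) + τ]_+, i.e. a Euclidean projection onto the simplex; the sort-based search for τ and the tolerance are implementation choices standing in for the O(N_w) median-based routine of [11]:

```python
import numpy as np

def inner_solve(v, lam, n_w):
    """Maximize q.v - (lam/n_w) * sum((n_w * q_m)^2) over the simplex:
    q_m = max(v_m / (2*lam*n_w) + tau, 0), with tau set so q sums to 1."""
    u = v / (2.0 * lam * n_w)
    s = np.sort(u)[::-1]                         # sorted descending
    css = np.cumsum(s)
    ks = np.arange(1, len(u) + 1)
    taus = (1.0 - css) / ks
    k = np.max(np.where(s + taus > 0)[0]) + 1    # largest feasible support size
    tau = (1.0 - css[k - 1]) / k
    return np.maximum(u + tau, 0.0)

def chi2(q):
    """D_f(q || 1/N_w) for f(t) = t^2 - 1."""
    return len(q) * np.sum(q ** 2) - 1.0

def solve_alignment(Phi, y, rho, eps=1e-3):
    """Bisection on the dual variable lambda, as in Algorithm 1 (k = 2)."""
    v = (Phi @ y) ** 2
    n_w = len(v)
    lam_l, lam_s, lam_u = 0.0, 1.0, np.inf
    while lam_u == np.inf:                       # find an upper bracket
        q = inner_solve(v, lam_s, n_w)
        if chi2(q) < rho:
            lam_u = lam_s
        else:
            lam_s *= 2.0
    while lam_u - lam_l > eps * lam_s:           # bisection phase
        lam = 0.5 * (lam_u + lam_l)
        q = inner_solve(v, lam, n_w)
        if chi2(q) < rho:
            lam_u = lam
        else:
            lam_l = lam
    return q
```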
3 Consistency and generalization performance guarantees
Although the procedure (4) is a discrete approximation to a heuristic kernel alignment problem,
we can provide guarantees on its performance as well as the generalization performance of our
subsequent model trained with the optimized kernel.
Consistency First, we provide guarantees that the solution to problem (4) approaches a population
optimum as the data and random sampling increase (n → ∞ and N_w → ∞, respectively). We
consider the following (slightly more general) setting: let S : X × X → [−1, 1] be a bounded
function, where we intuitively think of S(x, x′) as a similarity metric between labels for x and x′,
and denote S_ij := S(x^i, x^j) (in the binary case with y ∈ {−1, 1}, we have S_ij = y^i y^j). We then
define the alignment functions

$$T(P) := \mathbb{E}\big[S(X, X')\,K_P(X, X')\big], \qquad \hat T(P) := \frac{1}{n(n-1)}\sum_{i\ne j} S_{ij}\,K_P(x^i, x^j),$$

where the expectation is taken over S and the independent variables X, X′. Lemmas 1 and 2 provide
consistency guarantees with respect to the data sample (x^i and S_ij) and the random feature sample
(w^m); together they give us the overall consistency result of Theorem 1. We provide proofs in the
supplement (Sections A.1, A.2, and A.3 respectively).
Lemma 1 (Consistency with respect to data). Let f(t) = t^k − 1 for k ≥ 2. Let P₀ be any distribution
on the space W, and let P = {Q : D_f(Q‖P₀) ≤ ρ}. Then

$$\mathbb{P}\Big(\sup_{Q\in\mathcal{P}} \big|\hat T(Q) - T(Q)\big| \ge t\Big) \le 2\exp\Big(-\frac{n t^2}{16(1+\rho)}\Big).$$
Lemma 1 shows that the empirical quantity T̂ is close to the true T. Now we show that, independent
of the size of the training data, we can consistently estimate the optimal Q ∈ P via sampling (i.e.
Q ∈ P_{N_w}).

Lemma 2 (Consistency with respect to sampling features). Let the conditions of Lemma 1 hold.
Then, with C_ρ = 2(ρ+1)/(√(1+ρ) − 1) and D = 8(1 + ρ), we have

$$\sup_{Q\in\mathcal{P}} \hat T(Q) - \sup_{Q\in\mathcal{P}_{N_w}} \hat T(Q) \le 4C_\rho\sqrt{\frac{\log(2N_w)}{N_w}} + D\sqrt{\frac{\log\frac{2}{\delta}}{N_w}}$$

with probability at least 1 − δ over the draw of the samples W^m drawn i.i.d. from P₀.
Finally, we combine the consistency guarantees for data and sampling to reach our main result, which
shows that the alignment provided by the estimated distribution Q̂ is nearly optimal.

Theorem 1. Let Q̂_w maximize T̂(Q) over Q ∈ P_{N_w}. Then, with probability at least 1 − 3δ over the
sampling of both (x, y) and W, we have

$$T(\hat Q_w) \ge \sup_{Q\in\mathcal{P}} T(Q) - 4C_\rho\sqrt{\frac{\log(2N_w)}{N_w}} - D\sqrt{\frac{\log\frac{2}{\delta}}{N_w}} - 2D\sqrt{\frac{2\log\frac{2}{\delta}}{n}}.$$
Generalization performance The consistency results above show that our optimization procedure
nearly maximizes alignment T(P), but they say little about generalization performance for our model
trained using the optimized kernel. We now show that the class of estimators employed by our method
has strong performance guarantees. By construction, our estimator (6) uses the function class

$$\mathcal{F}_{N_w} := \Big\{ h(x) = \sum_{m=1}^{N_w} \theta_m \sqrt{q_m}\,\phi(x, w^m)\ \Big|\ q\in\mathcal{P}_{N_w},\ \|\theta\|_2 \le B \Big\},$$

and we provide bounds on its generalization via empirical Rademacher complexity. To that end,
define R_n(F_{N_w}) := (1/n) E[sup_{f∈F_{N_w}} Σ_{i=1}^n σ_i f(x^i)], where the expectation is taken over the i.i.d.
Rademacher variables σ_i ∈ {−1, 1}. We have the following lemma, whose proof is in Section A.4.

Lemma 3. Under the conditions of the preceding paragraph, R_n(F_{N_w}) ≤ B √(2(1+ρ)/n).

Applying standard concentration results, we obtain the following generalization guarantee.

Theorem 2 ([8, 18]). Let the true misclassification risk and γ-empirical misclassification risk for an
estimator h be defined as follows:

$$R(h) := \mathbb{P}(Y h(X) < 0), \qquad \hat R_\gamma(h) := \frac{1}{n}\sum_{i=1}^n \min\big\{1,\ [1 - y^i h(x^i)/\gamma]_+\big\}.$$

Then sup_{h∈F_{N_w}} {R(h) − R̂_γ(h)} ≤ (2/γ) R_n(F_{N_w}) + 3 √(log(2/δ)/(2n)) with probability at least 1 − δ.

The bound is independent of the number of terms N_w, though in practice we let B grow with N_w.
4 Empirical evaluations
We now turn to empirical evaluations, comparing our approach's predictive performance with that of
Rahimi and Recht's randomized features [24] as well as a joint optimization over kernel compositions
and empirical risk. In each of our experiments, we investigate the effect of increasing dimensionality
of the randomized feature space D. For our approach, we use the χ²-divergence (k = 2, or f(t) =
t² − 1). Letting q̂ denote the solution to problem (4), we use two variants of our approach: when
D < nnz(q̂) we use estimator (5), and we use estimator (6) otherwise. For the original randomized
feature approach, we relax the constraint in problem (7) with an ℓ2 penalty. Finally, for the joint
optimization in which we learn the kernel and classifier together, we consider the kernel-learning
objective, i.e. finding the best Gram matrix G in problem (1) for the soft-margin SVM [14]:
$$\underset{q\in\mathcal{P}_{N_w}}{\text{minimize}}\ \sup_{\alpha}\ \alpha^T\mathbf{1} - \frac{1}{2}\sum_{i,j}\alpha_i\alpha_j\, y^i y^j \sum_{m=1}^{N_w} q_m\,\phi(x^i, w^m)\,\phi(x^j, w^m) \quad \text{subject to}\quad 0 \le \alpha \le C\mathbf{1},\ \alpha^T y = 0. \qquad (9)$$
We use a standard primal-dual algorithm [4] to solve the min-max problem (9). While this is an
expensive optimization, it is a convex problem and is solvable in polynomial time.
In Section 4.1, we visualize a particular problem that illustrates the effectiveness of our approach
when the user-defined kernel is poor. Section 4.2 shows how learning the kernel can be used to quickly
find a sparse set of features in high dimensional data, and Section 4.3 compares our performance with
unoptimized random features and the joint procedure (9) on benchmark datasets. The supplement
contains more experimental results in Section C.
4.1 Learning a new kernel with a poor choice of P₀
For our first experiment, we generate synthetic data x^i drawn i.i.d. from N(0, I) with labels y^i = sign(‖x‖₂ − √d),
where x ∈ R^d. The Gaussian kernel is ill-suited for this task, as the Euclidean distance used
in this kernel does not capture the underlying structure of the classes. Nevertheless, we use the
Gaussian kernel, which corresponds [24] to φ(x, (w, v)) = cos((x, 1)ᵀ(w, v)), where (W, V) ∼
N(0, I) × Uni(0, 2π), to showcase the effects of our method. We consider a training set of size
n = 10^4 and a test set of size 10^3, and we employ logistic regression with D = nnz(q̂) for both our
technique as well as the original random feature approach.¹

¹ For 2 ≤ d ≤ 15, nnz(q̂) < 250 when the kernel is trained with N_w = 2 × 10^4 and ρ = 200.
[Figure 1 panels: (a) training data and optimized features for d = 2; (b) train/test error vs. d for GK and OK.]

Figure 1. Experiments with synthetic data. (a) Positive and negative training examples are blue and red,
and optimized randomized features (w^m) are yellow. All offset parameters v^m were optimized to be
near 0 or π (not shown). (b) Misclassification error of logistic regression model vs. dimensionality of
data. GK denotes random features with a Gaussian kernel, and our optimized kernel is denoted OK.
[Figure 2 panels: (a) error vs. D; (b) q̂_i vs. i.]

Figure 2. Feature selection in sparse data. (a) Misclassification error of ridge regression model vs.
dimensionality of data. LK denotes random features with a linear kernel, and OK denotes our method.
Our error is fixed above D = nnz(q̂), after which we employ estimator (6). (b) Weight of feature i in
the optimized kernel (q̂_i) vs. i. Vertical bars delineate separations between k-grams, where 1 ≤ k ≤ 5 is
nondecreasing in i. Circled features are prefixes of GGTTG and GTTGG at indices 60-64.
Figure 1 shows the results of the experiments for d ∈ {2, ..., 15}. Figure 1(a) illustrates the output
of the optimization when d = 2. The selected kernel features w^m lie near (1, 1) and (−1, −1); the
offsets v^m are near 0 and π, giving the feature φ(·, w, v) a parity flip. Thus, the kernel computes
similarity between datapoints via neighborhoods of (1, 1) and (−1, −1) close to the classification
boundary. In higher dimensions, this generalizes to neighborhoods of pairs of opposing points along
the surface of the d-sphere; these features provide a coarse approximation to vector magnitude.
Performance degradation with d occurs because the neighborhoods grow exponentially larger and
less dense (due to fixed N_w and n). Nevertheless, as shown in Figure 1(b), this degradation occurs
much more slowly than that of the Gaussian kernel, which suffers a similar curse of dimensionality
due to its dependence on Euclidean distance. Although somewhat contrived, this example shows that
even in situations with poor base kernels our approach learns a more suitable representation.
4.2 Feature selection and biological sequences
In addition to the computational advantages rendered by the sparsity of q after performing the
optimization (4), we can use this sparsity to gain insights about important features in high-dimensional
datasets; this can act as an efficient filtering mechanism before further investigation. We present
one example of this task, studying an aptamer selection problem [6]. In this task, we are given
n = 2900 nucleotide sequences (aptamers) x^i ∈ A^81, where A = {A, C, G, T}, and labels y^i indicate
(thresholded) binding affinity of the aptamer to a molecular target. We create one-hot encoded forms
of k-grams of the sequence, where 1 ≤ k ≤ 5, resulting in d = Σ_{k=1}^5 |A|^k (82 − k) = 105,476
[Figure 3 panels: (a)-(c) error vs. D for adult, reuters, and buzz; (d)-(f) speedup vs. D for adult, reuters, and buzz.]

Figure 3. Performance analysis on benchmark datasets. The top row shows training and test misclassification rates. Our method is denoted as OK and is shown in red. The blue methods are random features
with Gaussian, linear, or arc-cosine kernels (GK, LK, or ACK respectively). Our error and running
time become fixed above D = nnz(q̂), after which we employ estimator (6). The bottom row shows the
speedup factor of using our method over regular random features (speedup = x indicates our method
takes 1/x of the time required to use regular random features). Our method is faster at moderate to large
D and shows better performance than the random feature approach at small to moderate D.
Table 1: Best test results over benchmark datasets

Dataset | n, n_test      | d     | Model    | Our error (%), time (s) | Random error (%), time (s)
adult   | 32561, 16281   | 123   | Logistic | 15.54, 3.6              | 15.44, 43.1
reuters | 23149, 781265  | 47236 | Ridge    | 9.27, 0.8               | 9.36, 295.9
buzz    | 105530, 35177  | 77    | Ridge    | 4.92, 2.0               | 4.58, 11.9
features. We consider the linear kernel, i.e. φ(x, w) = x_w, where w ∼ Uni({1, ..., d}). Figure 2(a)
compares the misclassification error of our method with that of random k-gram features, while Figure
2(b) indicates the weights q_i given to features by our method. In under 0.2 seconds, we whittle down
the original feature space to 379 important features. By restricting random selection to just these
features, we outperform the approach of selecting features uniformly at random when D ≪ d. More
importantly, however, we can derive insights from this selection. For example, the circled features in
Figure 2(b) correspond to k-gram prefixes for the 5-grams GGTTG and GTTGG at indices 60 through
64; G-complexes are known to be relevant for binding affinities in aptamers [6], so this is reasonable.
4.3 Performance on benchmark datasets
We now show the benefits of our approach on large-scale datasets, where it combines the efficiency
of random features with the performance of kernel-learning techniques. We perform experiments
on three distinct types of datasets, tracking training/test error rates as well as total (training + test)
time. For the adult² dataset we employ the Gaussian kernel with a logistic regression model, and
for the reuters³ dataset we employ a linear kernel with a ridge regression model. For the buzz⁴
dataset we employ ridge regression with an arc-cosine kernel of order 2, i.e. P₀ = N(0, I) and
φ(x, w) = H(wᵀx)(wᵀx)², where H(·) is the Heaviside step function [7].
² https://archive.ics.uci.edu/ml/datasets/Adult
³ http://www.ai.mit.edu/projects/jmlr/papers/volume5/lewis04a/lyrl2004_rcv1v2_README.htm. We consider predicting whether a document has a CCAT label.
⁴ http://ama.liglab.fr/data/buzz/classification/. We use the Twitter dataset.
Table 2: Comparisons with joint optimization on subsampled data

Dataset | Our training / test error (%), time (s) | Joint training / test error (%), time (s)
adult   | 16.22 / 16.36, 1.8                      | 14.88 / 16.31, 198.1
reuters | 7.64 / 9.66, 0.6                        | 6.30 / 8.96, 173.3
buzz    | 8.44 / 8.32, 0.4                        | 7.38 / 7.08, 137.5
Comparison with unoptimized random features Results comparing our method with unoptimized random features are shown in Figure 3 for many values of D, and Table 1 tabulates the best
test error and corresponding time for the methods. Our method outperforms the original random
feature approach in terms of generalization error for small and moderate values of D; at very large D
the random feature approach either matches or surpasses our performance. The trends in speedup
are opposite: our method requires extra optimizations that dominate training time at extremely small
D; at very large D we use estimator (6), so our method requires less overall time. The nonmonotonic
behavior for reuters (Figure 3(e)) occurs due to the following: at D ≲ nnz(q̂), sampling indices
from the optimized distribution takes a non-negligible fraction of total time, and solving the linear
system requires more time when rows of Φ are not unique (due to sampling).
Performance improvements also depend on the kernel choice for a dataset. Namely, our method
provides the most improvement, in terms of training time for a given amount of generalization error,
over random features generated for the linear kernel on the reuters dataset; we are able to surpass
the best results of the random feature approach 2 orders of magnitude faster. This makes sense when
considering the ability of our method to sample from a small subset of important features. On the
other hand, random features for the arc-cosine kernel are able to achieve excellent results on the
buzz dataset even without optimization, so our approach only offers modest improvement at small to
moderate D. For the Gaussian kernel employed on the adult dataset, our method is able to achieve
the same generalization performance as random features in roughly 1/12 the training time.
Thus, we see that our optimization approach generally achieves competitive results with random
features at lower computational costs, and it offers the most improvements when either the base
kernel is not well-suited to the data or requires a large number of random features (large D) for good
performance. In other words, our method reduces the sensitivity of model performance to the user's
selection of base kernels.
Comparison with joint optimization Despite the fact that we do not choose empirical risk as our
objective in optimizing kernel compositions, our optimized kernel enjoys competitive generalization
performance compared to the joint optimization procedure (9). Because the joint optimization is
very costly, we consider subsampled training datasets of 5000 training examples. Results are shown
in Table 2, where it is evident that the efficiency of our method outweighs the marginal gain in
classification performance for joint optimization.
5 Conclusion
We have developed a method to learn a kernel in a supervised manner using random features. Although
we consider a kernel alignment problem similar to other approaches in the literature, we exploit
computational advantages offered by random features to develop a much more efficient and scalable
optimization procedure. Our concentration bounds guarantee the results of our optimization procedure
closely match the limits of infinite data (n ? ?) and sampling (Nw ? ?), and our method produces
models that enjoy good generalization performance guarantees. Empirical evaluations indicate that
our optimized kernels indeed ?learn? structure from data, and we attain competitive results on
benchmark datasets at a fraction of the training time for other methods. Generalizing the theoretical
results for concentration and risk to other f-divergences is the subject of further research. More
broadly, our approach opens exciting questions regarding the usefulness of simple optimizations on
random features in speeding up other traditionally expensive learning problems.
Acknowledgements This research was supported by a Fannie & John Hertz Foundation Fellowship
and a Stanford Graduate Fellowship.
References
[1] P. L. Bartlett and S. Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. The Journal of Machine Learning Research, 3:463-482, 2003.
[2] A. Ben-Hur and W. S. Noble. Kernel methods for predicting protein-protein interactions. Bioinformatics, 21(suppl 1):i38-i46, 2005.
[3] A. Ben-Tal, D. den Hertog, A. D. Waegenaere, B. Melenberg, and G. Rennen. Robust solutions of optimization problems affected by uncertain probabilities. Management Science, 59(2):341-357, 2013.
[4] D. Bertsekas. Nonlinear Programming. Athena Scientific, 1999.
[5] S. Boucheron, G. Lugosi, and P. Massart. Concentration Inequalities: a Nonasymptotic Theory of Independence. Oxford University Press, 2013.
[6] M. Cho, S. S. Oh, J. Nie, R. Stewart, M. Eisenstein, J. Chambers, J. D. Marth, F. Walker, J. A. Thomson, and H. T. Soh. Quantitative selection and parallel characterization of aptamers. Proceedings of the National Academy of Sciences, 110(46), 2013.
[7] Y. Cho and L. K. Saul. Kernel methods for deep learning. In Advances in Neural Information Processing Systems, pages 342-350, 2009.
[8] C. Cortes, M. Mohri, and A. Rostamizadeh. Generalization bounds for learning kernels. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 247-254, 2010.
[9] C. Cortes, M. Mohri, and A. Rostamizadeh. Algorithms for learning kernels based on centered alignment. The Journal of Machine Learning Research, 13(1):795-828, 2012.
[10] N. Cristianini, J. Kandola, A. Elisseeff, and J. Shawe-Taylor. On kernel target alignment. In Innovations in Machine Learning, pages 205-256. Springer, 2006.
[11] J. C. Duchi, S. Shalev-Shwartz, Y. Singer, and T. Chandra. Efficient projections onto the ℓ1-ball for learning in high dimensions. In Proceedings of the 25th International Conference on Machine Learning, 2008.
[12] D. Duvenaud, J. R. Lloyd, R. Grosse, J. B. Tenenbaum, and Z. Ghahramani. Structure discovery in nonparametric regression through compositional kernel search. arXiv preprint arXiv:1302.4922, 2013.
[13] M. Girolami and S. Rogers. Hierarchic Bayesian models for kernel learning. In Proceedings of the 22nd International Conference on Machine Learning, pages 241-248. ACM, 2005.
[14] M. Gönen and E. Alpaydın. Multiple kernel learning algorithms. The Journal of Machine Learning Research, 12:2211-2268, 2011.
[15] G. E. Hinton and R. R. Salakhutdinov. Using deep belief nets to learn covariance kernels for Gaussian processes. In Advances in Neural Information Processing Systems, pages 1249-1256, 2008.
[16] J. Kandola, J. Shawe-Taylor, and N. Cristianini. Optimizing kernel alignment over combinations of kernels. 2002.
[17] M. Kloft, U. Brefeld, S. Sonnenburg, and A. Zien. Lp-norm multiple kernel learning. The Journal of Machine Learning Research, 12:953-997, 2011.
[18] V. Koltchinskii and D. Panchenko. Empirical margin distributions and bounding the generalization error of combined classifiers. Annals of Statistics, pages 1-50, 2002.
[19] V. Koltchinskii, D. Panchenko, et al. Complexities of convex combinations and bounding the generalization error in classification. The Annals of Statistics, 33(4):1455-1496, 2005.
[20] G. R. Lanckriet, N. Cristianini, P. Bartlett, L. E. Ghaoui, and M. I. Jordan. Learning the kernel matrix with semidefinite programming. The Journal of Machine Learning Research, 5:27-72, 2004.
[21] Q. Le, T. Sarlós, and A. Smola. Fastfood: computing Hilbert space expansions in loglinear time. In Proceedings of the 30th International Conference on Machine Learning, pages 244-252, 2013.
[22] D. Luenberger. Optimization by Vector Space Methods. Wiley, 1969.
[23] S. Qiu and T. Lane. A framework for multiple kernel support vector regression and its applications to siRNA efficacy prediction. Computational Biology and Bioinformatics, IEEE/ACM Transactions on, 6(2):190-199, 2009.
[24] A. Rahimi and B. Recht. Random features for large-scale kernel machines. In Advances in Neural Information Processing Systems 20, 2007.
[25] A. Rahimi and B. Recht. Weighted sums of random kitchen sinks: replacing minimization with randomization in learning. In Advances in Neural Information Processing Systems 21, 2008.
[26] P. Samson. Concentration of measure inequalities for Markov chains and Φ-mixing processes. Annals of Probability, 28(1):416-461, 2000.
[27] J. Shawe-Taylor and N. Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, 2004.
[28] Y. Ying, K. Huang, and C. Campbell. Enhanced protein fold recognition through a novel data integration approach. BMC Bioinformatics, 10(1):1, 2009.
[29] A. Zien and C. S. Ong. Multiclass multiple kernel learning. In Proceedings of the 24th International Conference on Machine Learning, pages 1191-1198. ACM, 2007.
5,726 | 6,181 | Learning Influence Functions from Incomplete
Observations
Xinran He
Ke Xu
David Kempe
Yan Liu
University of Southern California, Los Angeles, CA 90089
{xinranhe, xuk, dkempe, yanliu.cs}@usc.edu
Abstract
We study the problem of learning influence functions under incomplete observations of node activations. Incomplete observations are a major concern as most
(online and real-world) social networks are not fully observable. We establish
both proper and improper PAC learnability of influence functions under randomly
missing observations. Proper PAC learnability under the Discrete-Time Linear
Threshold (DLT) and Discrete-Time Independent Cascade (DIC) models is established by reducing incomplete observations to complete observations in a modified
graph. Our improper PAC learnability result applies for the DLT and DIC models
as well as the Continuous-Time Independent Cascade (CIC) model. It is based
on a parametrization in terms of reachability features, and also gives rise to an
efficient and practical heuristic. Experiments on synthetic and real-world datasets
demonstrate the ability of our method to compensate even for a fairly large fraction
of missing observations.
1 Introduction
Many social phenomena, such as the spread of diseases, behaviors, technologies, or products, can
naturally be modeled as the diffusion of a contagion across a network. Owing to the potentially high
social or economic value of accelerating or inhibiting such diffusions, the goal of understanding
the flow of information and predicting information cascades has been an active area of research
[10, 7, 9, 14, 1, 20]. A key task here is learning influence functions, mapping sets of initial adopters
to the individuals who will be influenced (also called active) by the end of the diffusion process [10].
Many methods have been developed to solve the influence function learning problem [9, 7, 5, 8, 3,
16, 18, 24, 25]. Most approaches are based on fitting the parameters of a diffusion model based on
observations, e.g., [8, 7, 18, 9, 16]. Recently, Du et al. [3] proposed a model-free approach to learn
influence functions as coverage functions; Narasimhan et al. [16] establish proper PAC learnability of
influence functions under several widely-used diffusion models.
All existing approaches rely on the assumption that the observations in the training dataset are
complete, in the sense that all active nodes are observed as being active. However, this assumption
fails to hold in virtually all practical applications [15, 6, 2, 21]. For example, social media data are
usually collected through crawlers or acquired with public APIs provided by social media platforms,
such as Twitter or Facebook. Due to non-technical reasons and established restrictions on the APIs, it
is often impossible to obtain a complete set of observations even for a short period of time. In turn,
the existence of unobserved nodes, links, or activations may lead to a significant misestimation of the
diffusion model's parameters [19, 15].
We take a step towards addressing the problem of learning influence functions from incomplete
observations. Missing data are a complicated phenomenon, but to address it meaningfully and
rigorously, one must make at least some assumptions about the process resulting in the loss of data.
We focus on random loss of observations: for each activated node independently, the node?s activation
is observed only with probability r, the retention rate, and fails to be observed with probability
1 − r. Random observation loss naturally occurs when crawling data from social media, where rate
restrictions are likely to affect all observations equally.
We establish both proper and improper PAC learnability of influence functions under incomplete
observations for two popular diffusion models: the Discrete-Time Independent Cascade (DIC) and
Discrete-Time Linear Threshold (DLT) models. In fact, randomly missing observations do not
even significantly increase the required sample complexity. The result is proved by interpreting the
incomplete observations as complete observations in a transformed graph.
The PAC learnability result implies good sample complexity bounds for the DIC and DLT models. However, the PAC learnability result does not lead to an efficient algorithm, as it involves
marginalizing a large number of hidden variables (one for each node not observed to be active).
Towards designing more practical algorithms and obtaining learnability under a broader class of
diffusion models, we pursue improper learning approaches. Concretely, we use the parameterization
of Du et al. [3] in terms of reachability basis functions, and optimize a modified loss function
suggested by Natarajan et al. [17] to address incomplete observations. We prove that the algorithm
ensures improper PAC learning for the DIC, DLT and Continuous-Time Independent Cascade (CIC)
models. Experimental results on synthetic cascades generated from these diffusion models and
real-world cascades in the MemeTracker dataset demonstrate the effectiveness of our approach. Our
algorithm achieves nearly a 20% reduction in estimation error compared to the best baseline methods
on the MemeTracker dataset.
Several recent works also aim to address the issue of missing observations in social network analysis,
but with different emphases. For example, Chierichetti et al. [2] and Sadikov et al. [21] mainly focus
on recovering the size of a diffusion process, while our task is to learn the influence functions from
several incomplete cascades. Myers et al. [15] mainly aim to model unobserved external influence
in diffusion. Duong et al. [6] examine learning diffusion models with missing links from complete
observations, while we learn influence functions from incomplete cascades with missing activations.
Most related to our work are papers by Wu et al. [23] and simultaneous work by Lokhov [13]. Both
study the problem of network inference under incomplete observations. Lokhov proposes a dynamic
message passing approach to marginalize all the missing activations, in order to infer diffusion model
parameters using maximum likelihood estimation, while Wu et al. develop an EM algorithm. Notice
that the goal of learning the model parameters differs from our goal of learning the influence functions
directly. Both [13] and [23] provide empirical evaluation, but do not provide theoretical guarantees.
2 Preliminaries
2.1 Models of Diffusion and Incomplete Observations
Diffusion Model. We model propagation of opinions, products, or behaviors as a diffusion process
over a social network. The social network is represented as a directed graph G = (V, E), where
n = |V| is the number of nodes, and m = |E| is the number of edges. Each edge e = (u, v) is associated with a parameter $w_{uv}$ representing the strength of influence user u has on v. We assume that the graph structure (the edge set E) is known, while the parameters $w_{uv}$ are to be learned.
Depending on the diffusion model, there are different ways to represent the strength of influence
between individuals. Nodes can be in one of two states, inactive or active. We say that a node gets
activated if it adopts the opinion/product/behavior under the diffusion process. In this work, we focus
on progressive diffusion models, where a node remains active once it gets activated.
The diffusion process begins with a set of seed nodes (initial adopters) S, who start active. It then
proceeds in discrete or continuous time: according to a probabilistic process, additional nodes may
become active based on the influence from their neighbors. Let N(v) be the in-neighbors of node v and $A_t$ the set of nodes activated by time t. We consider the following three diffusion models:
Discrete-time Linear Threshold (DLT) model [10]: Each node v has a threshold $\theta_v$ drawn independently and uniformly from the interval [0, 1]. The diffusion under the DLT model unfolds in discrete time steps: a node v becomes active at step t if the total incoming weight from its active neighbors exceeds its threshold: $\sum_{u \in N(v) \cap A_{t-1}} w_{uv} \ge \theta_v$.
Discrete-time Independent Cascade (DIC) model [10]: The DIC model is also a discrete-time model. The weight $w_{uv} \in [0, 1]$ captures an activation probability. When a node u becomes active in step t, it attempts to activate all currently inactive neighbors in step t + 1. For each neighbor v, it succeeds with probability $w_{uv}$. If it succeeds, v becomes active; otherwise, v remains inactive. Once u has made all these attempts, it does not get to make further activation attempts at later times.
Continuous-time Independent Cascade (CIC) model [8]: The CIC model unfolds in continuous time. Each edge e = (u, v) is associated with a delay distribution with $w_{uv}$ as its parameter. When a node u becomes newly active at time t, for every neighbor v that is still inactive, a delay time $d_{uv}$ is drawn from the delay distribution. $d_{uv}$ is the duration it takes u to activate v, which could be infinite (if u does not succeed in activating v). Nodes are considered activated by the process if they are activated within a specified observation window $[0, \tau]$.
Fix one of the diffusion models defined above and its parameters. For each seed set S, let $\mathcal{D}_S$ be the distribution of final active sets. (In the case of the DIC and DLT model, this is the set of active nodes when no new activations occur; for the CIC model, it is the set of nodes active at time $\tau$.) For any node v, let $F_v(S) = \mathrm{Prob}_{A \sim \mathcal{D}_S}[v \in A]$ be the (marginal) probability that v is activated according to the dynamics of the diffusion model with initial seeds S. Then, define the influence function $F: 2^V \to [0, 1]^n$ mapping seed sets to the vector of marginal activation probabilities: $F(S) = [F_1(S), \ldots, F_n(S)]$. Notice that the marginal probabilities do not capture the full information about the diffusion process contained in $\mathcal{D}_S$ (since they do not observe co-activation patterns), but they are sufficient for many applications, such as influence maximization [10] and influence estimation [4].
Cascades and Incomplete Observations. We focus on the problem of learning influence functions
from cascades. A cascade C = (S, A) is a realization of the random diffusion process; S is the set of seeds and $A \sim \mathcal{D}_S$, $A \supseteq S$, is the set of activated nodes at the end of the random process. Similar to Narasimhan et al. [16], we focus on activation-only observations, namely, we only observe which nodes were activated, but not when these activations occurred.¹
To capture the fact that some of the node activations may have been unobserved, we use the following model of independently randomly missing data: for each (activated) node $v \in A \setminus S$, the activation of v is actually observed independently with probability r. With probability 1 − r, the node's activation is unobservable. For seed nodes $v \in S$, the activation is never lost. Formally, define $\hat{A}$ as follows: each $v \in S$ is deterministically in $\hat{A}$, and each $v \in A \setminus S$ is in $\hat{A}$ independently with probability r. Then, the incomplete cascade is denoted by $\hat{C} = (S, \hat{A})$.
2.2 Objective Functions and Learning Goals
To measure estimation error, we primarily use a quadratic loss function, as in [16, 3]. For two n-dimensional vectors x, y, the quadratic loss is defined as $\ell_{sq}(x, y) = \frac{1}{n}\|x - y\|_2^2$. We also use this notation when one or both arguments are sets: when an argument is a set S, we formally mean to use the indicator function $\chi_S$ as a vector, where $\chi_S(v) = 1$ if $v \in S$, and $\chi_S(v) = 0$ otherwise. In particular, for an activated set A, we write $\ell_{sq}(A, F(S)) = \frac{1}{n}\|\chi_A - F(S)\|_2^2$.
We now formally define the problem of learning influence functions from incomplete observations. Let $\mathcal{P}$ be a distribution over seed sets (i.e., a distribution over $2^V$), and fix a diffusion model $\mathcal{M}$ and parameters, together giving rise to a distribution $\mathcal{D}_S$ for each seed set. The algorithm is given a set of M incomplete cascades $\hat{\mathcal{C}} = \{(S_1, \hat{A}_1), \ldots, (S_M, \hat{A}_M)\}$, where each $S_i$ is drawn independently from $\mathcal{P}$, and $\hat{A}_i$ is obtained by the incomplete observation process described above from the (random) activated set $A_i \sim \mathcal{D}_{S_i}$. The goal is to learn an influence function F that accurately captures the diffusion process. Accuracy of the learned influence function is measured in terms of the squared error with respect to the true model: $\mathrm{err}_{sq}[F] = \mathbb{E}_{S \sim \mathcal{P}, A \sim \mathcal{D}_S}[\ell_{sq}(\chi_A, F(S))]$. That is, the expectation is over the seed set and the randomness in the diffusion process, but not the data loss process.
PAC Learnability of Influence Functions. We characterize the learnability of influence functions under incomplete observations using the Probably Approximately Correct (PAC) learning framework [22]. Let $\mathcal{F}_{\mathcal{M}}$ be the class of influence functions under the diffusion model $\mathcal{M}$, and $\mathcal{F}_L$ the class of influence functions the learning algorithm is allowed to choose from. We say that $\mathcal{F}_{\mathcal{M}}$ is PAC learnable if there exists an algorithm $\mathcal{A}$ with the following property: for all $\varepsilon, \delta \in (0, 1)$, all parametrizations of the diffusion model, and all distributions $\mathcal{P}$ over seed sets S: when given activation-only and incomplete training cascades $\hat{\mathcal{C}} = \{(S_1, \hat{A}_1), \ldots, (S_M, \hat{A}_M)\}$ with $M \ge \mathrm{poly}(n, m, 1/\varepsilon, 1/\delta)$, $\mathcal{A}$ outputs an influence function $F \in \mathcal{F}_L$ satisfying $\mathrm{Prob}_{\hat{\mathcal{C}}}[\mathrm{err}_{sq}[F] - \mathrm{err}_{sq}[F^*] \ge \varepsilon] \le \delta$.
¹ Narasimhan et al. [16] refer to this model as partial observations; we change the terminology to avoid confusion with "incomplete observations."
Here, $F^* \in \mathcal{F}_{\mathcal{M}}$ is the ground truth influence function. The probability is over the training cascades, including the seed set generation, the stochastic diffusion process, and the missing data process.
We say that an influence function learning algorithm $\mathcal{A}$ is proper if $\mathcal{F}_L \subseteq \mathcal{F}_{\mathcal{M}}$; that is, the learned influence function is guaranteed to be an instance of the true diffusion model. Otherwise, we say that $\mathcal{A}$ is an improper learning algorithm.
3 Proper PAC Learning under Incomplete Observations
In this section, we establish proper PAC learnability of influence functions under the DIC and DLT models. For both diffusion models, $\mathcal{F}_{\mathcal{M}}$ can be parameterized by an edge parameter vector w, whose entries $w_e$ are the activation probabilities (DIC model) or edge weights (DLT model). Our goal is to find an influence function $F^w \in \mathcal{F}_{\mathcal{M}}$ that outputs accurate marginal activation probabilities. While our goal is proper learning, meaning that the function must be from $\mathcal{F}_{\mathcal{M}}$, we do not require that the inferred parameters match the true edge parameters w. Our main theoretical results are summarized in Theorems 1 and 2.
Theorem 1. Let $\lambda \in (0, 0.5)$. The class of influence functions under the DIC model in which all edge activation probabilities satisfy $w_e \in [\lambda, 1 - \lambda]$ is PAC learnable under incomplete observations with retention rate r. The sample complexity² is $\tilde{O}\big(\frac{n^2 m^4}{\lambda^3 \varepsilon r}\big)$.
Theorem 2. Let $\lambda \in (0, 0.5)$, and consider the class of influence functions under the DLT model such that the edge weight for every edge satisfies $w_e \in [\lambda, 1 - \lambda]$, and for every node v, $1 - \sum_{u \in N(v)} w_{uv} \in [\lambda, 1 - \lambda]$. This class is PAC learnable under incomplete observations with retention rate r. The sample complexity is $\tilde{O}\big(\frac{n^2 m^4}{\lambda^3 \varepsilon r}\big)$.
In this section, we present the intuition and a proof sketch for the two theorems. Details of the proof
are provided in Appendix B.
The key idea of the proof of both theorems is that a set of incomplete cascades $\hat{\mathcal{C}}$ on G under the two models can be considered as essentially complete cascades on a transformed graph $\tilde{G} = (\tilde{V}, \tilde{E})$. The influence functions of nodes in $\tilde{G}$ can be learned using a modification of the result of Narasimhan et al. [16]. Subsequently, the influence functions for G are directly obtained from the influence functions for $\tilde{G}$, by exploiting that influence functions only focus on the marginal activation probabilities.
The transformed graph $\tilde{G}$ is built by adding a layer of n nodes to the graph G. For each node $v \in V$ of the original graph, we add a new node $v' \in V'$ and a directed edge $(v, v')$ with known and fixed edge parameter $\tilde{w}_{vv'} = r$. (The same parameter value serves as activation probability under the DIC model and as edge weight under the DLT model.) The new nodes $V'$ have no other incident edges, and we retain all edges $e = (u, v) \in E$. Inferring their parameters is the learning task.
For each observed (incomplete) cascade $(S_i, \hat{A}_i)$ on G (with $S_i \subseteq \hat{A}_i$), we produce an observed activation set $A'_i$ as follows: (1) for each $v \in \hat{A}_i \setminus S_i$, we let $v' \in A'_i$ deterministically; (2) for each $v \in S_i$ independently, we include $v' \in A'_i$ with probability r. This defines the training cascades $\tilde{\mathcal{C}} = \{(S_i, A'_i)\}$.
Now consider any edge parameters w, applied to both G and the first layer of $\tilde{G}$. Let $F(S)$ denote the influence function on G, and $\tilde{F}(S) = [\tilde{F}_{1'}(S), \ldots, \tilde{F}_{n'}(S)]$ the influence function of the nodes in the added layer $V'$ of $\tilde{G}$. Then, by the transformation, for all nodes $v \in V$, we get that
$$\tilde{F}_{v'}(S) = r \cdot F_v(S). \qquad (1)$$
And by the definition of the observation loss process, $\mathrm{Prob}[v \in \hat{A}_i] = r \cdot F_v(S) = \tilde{F}_{v'}(S)$ for all non-seed nodes $v \notin S_i$.
While the cascades $\tilde{\mathcal{C}}$ are not complete on all of $\tilde{G}$, in a precise sense, they provide complete information on the activation of nodes in $V'$. In Appendix B, we show that Theorem 2 of Narasimhan et al. [16] can be extended to provide identical guarantees for learning $\tilde{F}(S)$ from the modified observed cascades $\tilde{\mathcal{C}}$. For the DIC model, this is a straightforward modification of the proof from [16]. For the DLT model, [16] had not established PAC learnability³, so we provide a separate proof.
Because the results of [16] and our generalizations ensure proper learning, they provide edge weights w between the nodes of V. We use these exact same edge weights to define the learned influence functions in G. Equation (1) then implies that the learned influence functions on V satisfy $F_v(S) = \frac{1}{r} \cdot \tilde{F}_{v'}(S)$. The detailed analysis in Appendix B shows that the learning error only scales by a multiplicative factor $\frac{1}{r^2}$.
² The $\tilde{O}$ notation suppresses poly-logarithmic dependence on $1/\varepsilon$, $1/\delta$, n, and m.
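The reduction itself is mechanical. A small sketch of the graph transformation and cascade conversion, under assumed data structures (node lists, shadow nodes named with a trailing apostrophe), might look as follows.

```python
import random

def transform(nodes, r):
    """Add a shadow node v' per node v with a directed edge (v, v') of weight r.
    Original edges are kept unchanged (their weights are what we learn)."""
    shadow = {v: str(v) + "'" for v in nodes}
    shadow_edges = {(v, shadow[v]): r for v in nodes}   # known parameter w~_{vv'} = r
    return shadow, shadow_edges

def convert_cascade(seeds, A_hat, shadow, r):
    """Observed activation set A'_i on the shadow layer, per steps (1) and (2) above."""
    observed = {shadow[v] for v in A_hat - set(seeds)}             # deterministic
    observed |= {shadow[v] for v in seeds if random.random() < r}  # seeds: kept w.p. r
    return observed

shadow, shadow_edges = transform(nodes=[0, 1, 2], r=0.8)
print(convert_cascade(seeds={0}, A_hat={0, 1}, shadow=shadow, r=0.8))
```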
The PAC learnability result shows that there is no information-theoretical obstacle to influence
function learning under incomplete observations. However, it does not imply an efficient algorithm.
The reason is that a hidden variable would be associated with each node not observed to be active,
and computing the objective function for empirical risk minimization would require marginalizing
over all of the hidden variables. The proper PAC learnability result also does not readily generalize to
the CIC model and other diffusion models, even under complete observations. This is due to the lack
of a succinct characterization of influence functions as under the DIC and DLT models. Therefore,
in the next section, we explore improper learning approaches with the goal of designing practical
algorithms and establishing learnability under a broader class of diffusion models.
4 Efficient Improper Learning Algorithm
Instead of parameterizing the influence functions using the edge parameters, we adopt the model-free
influence function learning framework, InfluLearner, proposed by Du et al. [3] to represent the
influence function as a sum of weighted basis functions. From now on, we focus on the influence
function $F_v(S)$ of a single fixed node v.
Influence Function Parameterization. For all three diffusion models (CIC, DIC and DLT), the diffusion process can be characterized equivalently using live-edge graphs. Concretely, the results of [10, 4] state that for each instance of the CIC, DIC, and DLT models, there exists a distribution over live-edge graphs H assigning probability $\gamma_H$ to each live-edge graph H such that $F_v(S) = \sum_{H:\ \text{at least one node in } S \text{ has a path to } v \text{ in } H} \gamma_H$.
To reduce the representation complexity, notice that from the perspective of activating v, two different live-edge graphs $H, H'$ are "equivalent" if v is reachable from exactly the same nodes in H and $H'$. Therefore, for any node set T, let $\gamma^*_T := \sum_{H:\ \text{exactly the nodes in } T \text{ have paths to } v \text{ in } H} \gamma_H$. We then use characteristic vectors as feature vectors $r_T = \chi_T$, where we will interpret the entry for node u as u having a path to v in a live-edge graph. More precisely, let $\phi(x) = \min\{x, 1\}$, and $\chi_S$ the characteristic vector of the seed set S. Then, $\phi(\chi_S^\top \cdot r_T) = 1$ if and only if v is reachable from S, and we can write $F_v(S) = \sum_T \gamma^*_T \cdot \phi(\chi_S^\top \cdot r_T)$.
This representation still has exponentially many features (one for each T). In order to make the learning problem tractable, we sample a smaller set $\mathcal{T}$ of K features from a suitably chosen distribution, implicitly setting the weights $\gamma_T$ of all other features to 0. Thus, we will parametrize the learned influence function as $F_v(S) = \sum_{T \in \mathcal{T}} \gamma_T \cdot \phi(\chi_S^\top \cdot r_T)$.
The goal is then to learn weights $\gamma_T$ for the sampled features. (They will form a distribution, i.e., $\|\gamma\|_1 = 1$ and $\gamma \ge 0$.) The crux of the analysis is to show that a sufficiently small number K of features (i.e., sampled sets) suffices for a good approximation, and that the weights can be learned efficiently from a limited number of observed incomplete cascades. Specifically, we consider the log likelihood function $\ell(t, y) = y \log t + (1 - y)\log(1 - t)$, and learn the parameter vector $\gamma$ (a distribution) by maximizing the likelihood $\sum_{i=1}^{M} \ell(F_v(S_i), \chi_{A_i}(v))$.
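In code, this parameterization is a thin layer over a feature matrix; the sketch below (with made-up features and weights, purely for illustration) evaluates $F_v(S) = \sum_{T \in \mathcal{T}} \gamma_T\, \phi(\chi_S^\top r_T)$.

```python
import numpy as np

def influence_v(chi_S, R, gamma):
    """F_v(S) = sum_T gamma_T * phi(chi_S . r_T) with phi(x) = min(x, 1).
    chi_S: (N,) 0/1 seed indicator; R: (K, N) feature rows r_T; gamma: simplex weights."""
    phi = np.minimum(R @ chi_S, 1.0)      # entry is 1 iff some seed lies in the set T
    return float(gamma @ phi)

R = np.array([[1, 0, 0, 1, 0],            # hypothetical sampled feature sets
              [0, 1, 1, 0, 0],
              [0, 0, 0, 0, 1]], dtype=float)
gamma = np.array([0.5, 0.3, 0.2])         # hypothetical learned weights
chi_S = np.zeros(5); chi_S[0] = 1.0       # seed set S = {0}
print(influence_v(chi_S, R, gamma))       # 0.5: only the first feature set is hit
```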
Handling Incomplete Observations. The maximum likelihood estimation cannot be directly applied to incomplete cascades, as we do not have access to $A_i$ (only the incomplete version $\hat{A}_i$). To address this issue, notice that the MLE problem is actually a binary classification problem with log loss and $y_i = \chi_{A_i}(v)$ as the label. From this perspective, incompleteness is simply class-conditional noise on the labels. Let $\hat{y}_i = \chi_{\hat{A}_i}(v)$ be our observation of whether v was activated or not under the incomplete cascade i. Then, $\mathrm{Prob}[\hat{y}_i = 1 \mid y_i = 1] = r$ and $\mathrm{Prob}[\hat{y}_i = 1 \mid y_i = 0] = 0$. In words, the incomplete observation $\hat{y}_i$ suffers from one-sided error compared to the complete observation $y_i$. By results of Natarajan et al. [17], we can construct an unbiased estimator of $\ell(t, y)$ using only the incomplete observations $\hat{y}$, as in the following lemma.
³ [16] shows that the DLT model with fixed thresholds is PAC learnable under complete cascades. We study the DLT model when the thresholds are uniformly distributed random variables.
Lemma 3 (Corollary of Lemma 1 of [17]). Let y be the true activation of node v and $\hat{y}$ the incomplete observation. Then, defining $\hat{\ell}(t, y) := \frac{1}{r}\, y \log t + \frac{2r-1}{r}\,(1 - y)\log(1 - t)$, for any t, we have $\mathbb{E}_{\hat{y}}\big[\hat{\ell}(t, \hat{y})\big] = \ell(t, y)$.
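The unbiasedness claim can be checked numerically. The sketch below constructs a corrected loss directly from Natarajan et al.'s recipe with the one-sided noise rates stated above; this derived form is our assumption for illustration, and may differ in presentation from the closed form in the lemma.

```python
import numpy as np

def ell(t, y):                                   # complete-data log likelihood
    return y * np.log(t) + (1 - y) * np.log(1 - t)

def ell_hat(t, y_hat, r):
    """Noise-corrected loss from Natarajan et al.'s recipe with
    Prob[y_hat=1|y=1] = r and Prob[y_hat=1|y=0] = 0 (derived form, assumed)."""
    if y_hat == 1:
        return (np.log(t) - (1 - r) * np.log(1 - t)) / r
    return np.log(1 - t)

rng = np.random.default_rng(0)
r, t = 0.8, 0.3
for y in (0, 1):
    y_hats = (rng.random(50000) < r).astype(int) if y == 1 else np.zeros(50000, int)
    est = np.mean([ell_hat(t, yh, r) for yh in y_hats])
    print(y, round(est, 4), round(float(ell(t, y)), 4))   # columns should agree
```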
Based on this lemma, we arrive at the final algorithm of solving the maximum likelihood estimation problem with the adjusted likelihood function $\hat{\ell}(t, y)$:
$$\text{Maximize } \sum_{i=1}^{M} \hat{\ell}\big(F_v(S_i), \chi_{\hat{A}_i}(v)\big) \quad \text{subject to} \quad \|\gamma\|_1 = 1, \ \gamma \ge 0. \qquad (2)$$
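Since (2) maximizes a concave objective over the probability simplex, any simplex-constrained first-order method applies. Below is one possible solver step (an assumption for illustration, not necessarily the paper's solver): exponentiated-gradient ascent, using the corrected-loss form from the previous sketch.

```python
import numpy as np

def objective_grad(gamma, Phi, y_hat, r, eps=0.05):
    """Gradient of sum_i ell_hat(F_v(S_i), y_hat_i) w.r.t. gamma.
    Phi: (M, K), Phi[i, k] = phi(chi_{S_i} . r_{T_k}); y_hat: (M,) in {0, 1}."""
    t = np.clip(Phi @ gamma, eps, 1 - eps)          # predicted activation probs
    dldt = np.where(y_hat == 1,
                    1.0 / (r * t) + (1 - r) / (r * (1 - t)),
                    -1.0 / (1 - t))
    return Phi.T @ dldt                             # chain rule through F_v

def eg_step(gamma, grad, eta=0.01):
    g = gamma * np.exp(eta * (grad - grad.max()))   # multiplicative update; shifting
    return g / g.sum()                              # avoids overflow, stays on simplex

rng = np.random.default_rng(1)
Phi = (rng.random((50, 4)) < 0.5).astype(float)     # toy feature activations
y_hat = rng.integers(0, 2, size=50)                 # toy incomplete labels
gamma = np.full(4, 0.25)
for _ in range(100):
    gamma = eg_step(gamma, objective_grad(gamma, Phi, y_hat, r=0.8))
print(gamma)                                        # a point on the simplex
```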
We analyze conditions under which the solution to (2) provides improper PAC learnability under
incomplete observations; these conditions will apply for all three diffusion models.
These conditions are similar to those of Lemma 1 in the work of Du et al. [3], and concern the
approximability of the reachability distribution $\gamma^*_T$. Specifically, let q be a distribution over node sets T such that $q(T) \ge \gamma^*_T / C$ for all node sets T. Here, C is a (possibly very large) number that we will want to bound below. Let $T_1, \ldots, T_K$ be K i.i.d. samples drawn from the distribution q. The features are then $r_k = \chi_{T_k}$. We use the truncated version of the function $F_v(S)$ with parameter⁴ $\lambda$, as in [3]: $F_{v,\lambda}(S) = (1 - 2\lambda)F_v(S) + \lambda$.
Let $\mathcal{M}_\lambda$ be the class of all such truncated influence functions, and $\hat{F}_{v,\lambda} \in \mathcal{M}_\lambda$ the influence functions obtained from the optimization problem (2). The following theorem (proved in Appendix C) establishes the accuracy of the learned functions.
Theorem 4. Assume that the learning algorithm uses $K = \tilde{\Omega}(C^2/\varepsilon^2)$ features in the influence function it constructs, and observes⁵ $M = \tilde{\Omega}\big(\frac{\log C}{\varepsilon^4 r^2}\big)$ incomplete cascades with retention rate r. Then, with probability at least $1 - \delta$, the learned influence functions $\hat{F}_{v,\lambda}$ for each node v and seed distribution $\mathcal{P}$ satisfy $\mathbb{E}_{S \sim \mathcal{P}}\big[(\hat{F}_{v,\lambda}(S) - F_v(S))^2\big] \le \varepsilon$.
The theorem implies that with enough incomplete cascades, an algorithm can approximate the ground
truth influence function to arbitrary accuracy. Therefore, all three diffusion models are improperly
PAC learnable under incomplete observations. The final sample complexity does not contain the graph
size, but is implicitly captured by C, which will depend on the graph and how well the distribution $\gamma^*_T$ can be approximated. Notice that with r = 1, our bound on M has logarithmic dependency on C
instead of polynomial, as in [3]. The reason for this improvement is discussed further in Appendix C.
Efficient Implementation. As mentioned above, the features T cannot be sampled from the exact
reachability distribution $\gamma^*_T$, because it is inaccessible (and complex). In order to obtain useful guarantees from Theorem 4, we follow the approach of Du et al. [3], and approximate the distribution $\gamma^*_T$ with the product of the marginal distributions, estimated from observed cascades.
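A rough sketch of this heuristic follows; the marginal estimator below is a simplification assumed for illustration, not Du et al.'s exact scheme: estimate per-node co-activation marginals from cascades in which v was activated, then sample feature sets from the product of these marginals.

```python
import numpy as np

def estimate_marginals(cascades, v, n):
    """Crude estimate of the probability that node u is co-activated whenever v is.
    cascades: list of (seed_set, observed_active_set) pairs."""
    counts, hits = np.zeros(n), 0
    for S, A in cascades:
        if v in A:
            hits += 1
            for u in A:
                counts[u] += 1
    return counts / max(hits, 1)

def sample_features(p, K, rng=None):
    rng = rng or np.random.default_rng(0)
    return (rng.random((K, len(p))) < p).astype(float)   # rows are r_T vectors

cascades = [({0}, {0, 1, 2}), ({1}, {1, 2})]
p = estimate_marginals(cascades, v=2, n=4)
print(sample_features(p, K=3))
```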
The optimization problem (2) is convex and can therefore be solved in time polynomial in the number
of features K. However, the guarantees in Theorem 4 require a possibly large number of features. In
order to obtain an efficient algorithm for practical use and our experiments, we sacrifice the guarantee
and use a fixed number of features.
5 Experiments
In this section, we experimentally evaluate the algorithm from Section 4. Since no other methods
explicitly account for incomplete observations, we compare it to several state-of-the-art methods
for influence function learning with full information. Hence, the main goal of the comparison is to
examine to what extent the impact of missing data can be mitigated by being aware of it. We compare
⁴ The proof of Theorem 4 in Appendix C will show how to choose $\lambda$.
⁵ The $\tilde{\Omega}$ notation suppresses all logarithmic terms except $\log C$, as C could be exponential or worse in the number of nodes.
Figure 1: MAE of estimated influence as a function of the retention rate on synthetic datasets for (a) CIC model, (b) DIC model, (c) DLT model. The error bars show one standard deviation.
our algorithm to the following approaches: (1) CIC fits the parameters of a CIC model, using the
NetRate algorithm [7] with exponential delay distribution. (2) DIC fits the activation probabilities
of a DIC model using the method in [18]. (3) InfluLearner is the model-free approach proposed
by Du et al. in [3] and discussed in Section 4. (4) Logistic uses logistic regression to learn the influence functions $F_u(S) = f(\chi_S^\top \cdot c_u + b)$ for each u independently, where $c_u$ is a learnable weight vector and $f(x) = \frac{1}{1 + e^{-x}}$ is the logistic function. (5) Linear uses linear regression to learn the total influence $\sigma(S) = c^\top \cdot \chi_S + b$ of the set S. Notice that the CIC and DIC methods have access to
the activation time of each node in addition to the final activation status, giving them an inherent
advantage.
5.1 Synthetic cascades
Data generation. We generate synthetic networks with core-peripheral structure following the
Kronecker graph model [12] with parameter matrix [0.9, 0.5; 0.5, 0.3].6 Each generated network
has 512 nodes and 1024 edges. Subsequently, we generate 8192 cascades as training data using
the CIC, DIC and DLT models, with random seed sets whose sizes are power law distributed. The
retention rate is varied between 0.1 and 0.9. The test set contains 200 independently sampled seed
sets generated in the same way as the training data. Details of the data generation process are provided
in Appendix A.
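For reference, a bare-bones stochastic Kronecker generator in the spirit of [12] is sketched below. Note one assumption: it draws each edge independently, so the edge count is matched only in expectation, whereas the setup above fixes 1024 edges.

```python
import numpy as np

def kronecker_graph(theta, k, rng=None):
    """k-th Kronecker power of the 2x2 seed matrix gives per-edge probabilities;
    edges are then drawn independently."""
    rng = rng or np.random.default_rng(0)
    P = np.array(theta)
    for _ in range(k - 1):
        P = np.kron(P, np.array(theta))
    return rng.random(P.shape) < P        # boolean adjacency matrix (directed)

A = kronecker_graph([[0.9, 0.5], [0.5, 0.3]], k=9)   # 2^9 = 512 nodes
print(A.shape, int(A.sum()))
```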
Algorithm settings. We apply all algorithms to cascades generated from all three models; that
is, we also consider the results under model misspecification. Whenever applicable, we set the
hyperparameters of the five comparison algorithms to the ground truth values. When applying the
NetRate algorithm to discrete-time cascades, we set the observation window to 10.0. When applying
the method in [18] to continuous-time cascades, we round activation times up to the nearest multiple
of 0.1, resulting in 10 discrete time steps. For the model-free approaches (InfluLearner and our
algorithm), we use K = 200 features.
Results. Figure 1 shows the Mean Absolute Error (MAE) between the estimated total influence
$\sigma(S)$ and the true influence value, averaged over all testing seed sets. For each setting (diffusion
model and retention rate), the reported MAE is averaged over five independent runs.
The main insight is that accounting for missing observations indeed strongly mitigates their effect:
notice that for retention rates as small as 0.5, our algorithm can almost completely compensate for
the data loss, whereas both the model-free and parameter fitting approaches deteriorate significantly
even for retention rates close to 1. For the parameter fitting approaches, even such large retention
rates can lead to missing and spurious edges in the inferred networks, and thus significant estimation
errors. Additional observations include that fitting influence using (linear or logistic) regression
does not perform well at all, and that the CIC inference approach appears more robust to model
misspecification than the DIC approach.
Sensitivity of retention rate. We presented the algorithms as knowing r. Since r itself is inferred
from noisy data, it may be somewhat misestimated. Figure 2 shows the impact of misestimating r.
We generate synthetic cascades from all three diffusion models with a true retention rate of 0.8, and
⁶ We also experimented on Kronecker graphs with hierarchical community structure ([0.9, 0.1; 0.1, 0.9]) and random structure ([0.5, 0.5; 0.5, 0.5]). The results are similar and omitted due to space constraints.
Figure 2: Relative error in MAE under retention rate misspecification. x-axis: retention rate r used by the algorithm. y-axis: relative difference of MAE compared to using the true retention rate 0.8.
Figure 3: MAE of influence estimation on seven sets of real-world cascades with 20% of activations missing.
then apply our algorithm with (incorrect) retention rate $r \in \{0.6, 0.65, \ldots, 0.95, 1\}$. The results are
averaged over five independent runs. While the performance decreases as the misestimation gets
worse (after all, with r = 1, the algorithm is basically the same as InfluLearner), the degradation is
graceful.
5.2 Influence Estimation on real cascades
We further evaluate the performance of our method on the real-world MemeTracker7 dataset [11].
The dataset consists of the propagation of short textual phrases, referred to as Memes, via the
publication of blog posts and main-stream media news articles between March 2011 and February
2012. Specifically, the dataset contains seven groups of cascades corresponding to the propagation
of Memes with certain keywords, namely "apple and jobs", "tsunami earthquake", "william kate marriage", "occupy wall-street", "airstrikes", "egypt" and "elections." Each cascade group consists
of 1000 nodes, with a number of cascades varying from 1000 to 44000. We follow exactly the same
evaluation method as Du et al. [3] with a training/test set split of 60%/40%.
To test the performance of influence function learning under incomplete observations, we randomly
delete 20% of the occurrences, setting r = 0.8. The results for other retention rates are similar and
omitted. Figure 3 shows the MAE of our methods and the five baselines, averaged over 100 random
draws of test seed sets, for all groups of memes. While some baselines perform very poorly, even
compared to the best baseline (InfluLearner), our algorithm provides an 18% reduction in MAE
(averaged over the seven groups), showing the potential of data loss awareness to mitigate its effects.
6 Extensions and Future Work
In the full version available on arXiv, we show both experimentally and theoretically how to generalize
our results to non-uniform (but independent) loss of node activations, and how to deal with a
misestimation of the retention rate r. Any non-trivial partial information about r leads to positive
PAC learnability results.
A much more significant departure for future work would be dependent loss of activations, e.g., losing
all activations of some randomly chosen nodes. As another direction, it would be worthwhile to
generalize the PAC learnability results to other diffusion models, and to design an efficient algorithm
with PAC learning guarantees.
Acknowledgments
We would like to thank anonymous reviewers for useful feedback. The research was sponsored
in part by NSF research grant IIS-1254206 and by the U.S. Defense Advanced Research Projects
Agency (DARPA) under the Social Media in Strategic Communication (SMISC) program, Agreement
Number W911NF-12-1-0034. The views and conclusions are those of the authors and should not be
interpreted as representing the official policies of the funding agency or the U.S. Government.
⁷ We use the preprocessed version of the dataset released by Du et al. [3] and available at http://www.cc.gatech.edu/~ndu8/InfluLearner.html. Notice that the dataset is semi-real, as multi-node seed cascades are artificially created by merging single-node seed cascades.
References
[1] K. Amin, H. Heidari, and M. Kearns. Learning from contagion (without timestamps). In Proc. 31st ICML, pages 1845-1853, 2014.
[2] F. Chierichetti, D. Liben-Nowell, and J. M. Kleinberg. Reconstructing Patterns of Information Diffusion from Incomplete Observations. In Proc. 23rd NIPS, pages 792-800, 2011.
[3] N. Du, Y. Liang, M.-F. Balcan, and L. Song. Influence Function Learning in Information Diffusion Networks. In Proc. 31st ICML, pages 2016-2024, 2014.
[4] N. Du, L. Song, M. Gomez-Rodriguez, and H. Zha. Scalable Influence Estimation in Continuous-Time Diffusion Networks. In Proc. 25th NIPS, pages 3147-3155, 2013.
[5] N. Du, L. Song, S. Yuan, and A. J. Smola. Learning Networks of Heterogeneous Influence. In Proc. 24th NIPS, pages 2780-2788, 2012.
[6] Q. Duong, M. P. Wellman, and S. P. Singh. Modeling information diffusion in networks with unobserved links. In SocialCom/PASSAT, pages 362-369, 2011.
[7] M. Gomez-Rodriguez, D. Balduzzi, and B. Schölkopf. Uncovering the temporal dynamics of diffusion networks. In Proc. 28th ICML, pages 561-568, 2011.
[8] M. Gomez-Rodriguez, J. Leskovec, and A. Krause. Inferring networks of diffusion and influence. ACM Transactions on Knowledge Discovery from Data, 5(4), 2012.
[9] A. Goyal, F. Bonchi, and L. V. S. Lakshmanan. Learning influence probabilities in social networks. In Proc. 3rd WSDM, pages 241-250, 2010.
[10] D. Kempe, J. Kleinberg, and E. Tardos. Maximizing the Spread of Influence in a Social Network. In Proc. 9th KDD, pages 137-146, 2003.
[11] J. Leskovec, L. Backstrom, and J. Kleinberg. Meme-tracking and the dynamics of the news cycle. In Proc. 15th KDD, pages 497-506, 2009.
[12] J. Leskovec, D. Chakrabarti, J. Kleinberg, C. Faloutsos, and Z. Ghahramani. Kronecker graphs: An approach to modeling networks. The Journal of Machine Learning Research, 11:985-1042, 2010.
[13] A. Lokhov. Reconstructing parameters of spreading models from partial observations. In Proc. 28th NIPS, pages 3459-3467, 2016.
[14] S. A. Myers and J. Leskovec. On the convexity of latent social network inference. In Proc. 22nd NIPS, pages 1741-1749, 2010.
[15] S. A. Myers, C. Zhu, and J. Leskovec. Information Diffusion and External Influence in Networks. In Proc. 18th KDD, pages 33-41, 2012.
[16] H. Narasimhan, D. C. Parkes, and Y. Singer. Learnability of Influence in Networks. In Proc. 27th NIPS, pages 3168-3176, 2015.
[17] N. Natarajan, I. S. Dhillon, P. K. Ravikumar, and A. Tewari. Learning with noisy labels. In Proc. 25th NIPS, pages 1196-1204, 2013.
[18] N. Praneeth and S. Sujay. Learning the Graph of Epidemic Cascades. In Proc. 12th ACM Sigmetrics, pages 211-222, 2012.
[19] D. Quang, W. M. P, and S. S. P. Modeling Information Diffusion in Networks with Unobserved Links. In SocialCom, pages 362-369, 2011.
[20] N. Rosenfeld, M. Nitzan, and A. Globerson. Discriminative learning of infection models. In Proc. 9th WSDM, pages 563-572, 2016.
[21] E. Sadikov, M. Medina, J. Leskovec, and H. Garcia-Molina. Correcting for missing data in information cascades. In Proc. 4th WSDM, pages 55-64, 2011.
[22] L. G. Valiant. A theory of the learnable. Communications of the ACM, 27(11):1134-1142, 1984.
[23] X. Wu, A. Kumar, D. Sheldon, and S. Zilberstein. Parameter learning for latent network diffusion. In Proc. 29th IJCAI, pages 2923-2930, 2013.
[24] S.-H. Yang and H. Zha. Mixture of Mutually Exciting Processes for Viral Diffusion. In Proc. 30th ICML, pages 1-9, 2013.
[25] K. Zhou, H. Zha, and L. Song. Learning Social Infectivity in Sparse Low-rank Network Using Multi-dimensional Hawkes Processes. In Proc. 30th ICML, pages 641-649, 2013.
5,727 | 6,182 | Fast Mixing Markov Chains for Strongly Rayleigh
Measures, DPPs, and Constrained Sampling
Chengtao Li
MIT
[email protected]
Stefanie Jegelka
MIT
[email protected]
Suvrit Sra
MIT
[email protected]
Abstract
We study probability measures induced by set functions with constraints. Such
measures arise in a variety of real-world settings, where prior knowledge, resource
limitations, or other pragmatic considerations impose constraints. We consider
the task of rapidly sampling from such constrained measures, and develop fast
Markov chain samplers for them. Our first main result is for MCMC sampling
from Strongly Rayleigh (SR) measures, for which we present sharp polynomial
bounds on the mixing time. As a corollary, this result yields a fast mixing sampler
for Determinantal Point Processes (DPPs), yielding (to our knowledge) the first
provably fast MCMC sampler for DPPs since their inception over four decades
ago. Beyond SR measures, we develop MCMC samplers for probabilistic models
with hard constraints and identify sufficient conditions under which their chains
mix rapidly. We illustrate our claims by empirically verifying the dependence of
mixing times on the key factors governing our theoretical bounds.
1 Introduction
Distributions over subsets of objects arise in a variety of machine learning applications. They occur
as discrete probabilistic models [5, 20, 28, 36, 38] in computer vision, computational biology and
natural language processing. They also occur in combinatorial bandit learning [9], as well as in recent
applications to neural network compression [32] and matrix approximations [29].
Yet, practical use of discrete distributions can be hampered by computational challenges due to
their combinatorial nature. Consider for instance sampling, a task fundamental to learning, optimization, and approximation. Without further restrictions, efficient sampling can be impossible
[13]. Several lines of work thus focus on identifying tractable sub-classes, which in turn have had
wide-ranging impacts on modeling and algorithms. Important examples include the Ising model [22],
matchings (and the matrix permanent) [23], spanning trees (and graph algorithms) [2, 6, 16, 37],
and Determinantal Point Processes (DPPs) that have gained substantial attention in machine learning [3, 17, 24, 26, 28, 30].
In this work, we extend the classes of tractable discrete distributions. Specifically, we consider
the following two classes of distributions on $2^V$ (the set of subsets of a ground set V = [N] :=
{1, . . . , N }): (1) strongly Rayleigh (SR) measures, and (2) distributions with certain cardinality or
matroid-constraints. We analyze Markov chains for sampling from both classes. As a byproduct of
our analysis, we answer a long-standing question about rapid mixing of MCMC sampling from DPPs.
SR measures are defined by strong negative correlations, and have recently emerged as valuable
tools in the design of algorithms [2], in the theory of polynomials and combinatorics [4], and in
machine learning through DPPs, a special case of SR distributions. Our first main result is the first polynomial-time sampling algorithm that applies to all SR measures (and thus a fortiori to DPPs).
General distributions on $2^V$ with constrained support (case (2) above) typically arise upon incorporating prior knowledge or resource constraints. We focus on resource constraints such as bounds on
cardinality and bounds on including limited items from sub-groups. Such constraints can be phrased
as a family $\mathcal{C} \subseteq 2^V$ of subsets; we say S satisfies the constraint $\mathcal{C}$ iff $S \in \mathcal{C}$. Then the distribution of interest is of the form
$$\pi_{\mathcal{C}}(S) \propto \exp(\beta F(S))\,[\![S \in \mathcal{C}]\!], \qquad (1.1)$$
where $F: 2^V \to \mathbb{R}$ is a set function that encodes relationships between items $i \in V$, $[\![\cdot]\!]$ is the Iverson bracket, and $\beta$ a constant (also referred to as the inverse temperature). Most prior work on sampling with combinatorial constraints (such as sampling the bases of a matroid) assumes that F breaks up linearly using element-wise weights $w_i$, i.e., $F(S) = \sum_{i \in S} w_i$. In contrast, we allow generic, nonlinear functions, and obtain mixing times governed by structural properties of F.
Contributions. We briefly summarize the key contributions of this paper below.
• We derive a provably fast mixing Markov chain for efficient sampling from a strongly Rayleigh measure $\pi$ (Theorem 2). This Markov chain is novel and may be of independent interest. Our results provide the first polynomial guarantee (to our knowledge) for Markov chain sampling from a general DPP, and more generally from an SR distribution.¹
• We analyze (Theorem 4) mixing times of an exchange chain when the constraint family $\mathcal{C}$ is the set of bases of a special matroid, i.e., |S| = k or S obeys a partition constraint. Both of these constraints have high practical relevance [25, 27, 38].
• We analyze (Theorem 6) mixing times of an add-delete chain for the case $|S| \le k$, which, perhaps surprisingly, turns out to be quite different from |S| = k. This constraint can be more practical than the strict choice |S| = k, because in many applications, the user may have an upper bound on the budget, but may not necessarily want to expend all k units.
Finally, a detailed set of experiments illustrates our theoretical results.
Related work. Recent work in machine learning addresses sampling from distributions with sub- or
supermodular F [19, 34], determinantal point processes [3, 29], and sampling by optimization [14, 31].
Many of these works (necessarily) make additional assumptions on $\pi_{\mathcal{C}}$, or are approximate, or cannot
handle constraints. Moreover, the constraints cannot easily be included in F : an out-of-the-box
application of the result in [19], for instance, would lead to an unbounded constant in the mixing
time.
Apart from sampling, other related tracks include work on variational inference for combinatorial
distributions [5, 11, 36, 38] and inference for submodular processes [21]. Special instances of (1.1)
include [27], where the authors limit DPPs to sets that satisfy |S| = k; partition matroid constraints are studied in [25], while the budget constraint $|S| \le k$ has been used recently in learning DPPs [17]. Important existing results show fast mixing for a sub-family of strongly Rayleigh distributions [3, 15]; but those results do not include, for instance, general DPPs.
1.1 Background and Formal Setup
Before describing the details of our new contributions, let us briefly recall some useful background that also serves to set the notation. Our focus is on sampling from $\pi_{\mathcal{C}}$ in (1.1); we denote by $Z = \sum_{S \subseteq V} \exp(\beta F(S))$ and $Z_{\mathcal{C}} = \sum_{S \in \mathcal{C}} \exp(\beta F(S))$. The simplest example of $\pi_{\mathcal{C}}$ is the uniform distribution over sets in $\mathcal{C}$, where F(S) is constant. In general, F may be highly nonlinear. We sample from $\pi_{\mathcal{C}}$ using MCMC, i.e., we run a Markov Chain with state space $\mathcal{C}$. All our chains
are ergodic. The mixing time of the chain indicates the number of iterations t that we must perform
(after starting from an arbitrary set $X_0 \in \mathcal{C}$) before we can consider $X_t$ as a valid sample from $\pi_{\mathcal{C}}$. Formally, if $\Delta_{X_0}(t)$ is the total variation distance between the distribution of $X_t$ and $\pi_{\mathcal{C}}$ after t steps, then $\tau_{X_0}(\varepsilon) = \min\{t : \Delta_{X_0}(t') \le \varepsilon \ \forall t' \ge t\}$ is the mixing time to sample from a distribution ε-close to $\pi_{\mathcal{C}}$ in terms of total variation distance. We say that the chain mixes fast if $\tau_{X_0}$ is polynomial
in N . The mixing time can be bounded in terms of the eigenvalues of the transition matrix, as the
following classic result shows:
Theorem 1 (Mixing Time [10]). Let i be the eigenvalues of the transition matrix, and max =
max{ 2 , | N |} < 1. Then, the mixing time starting from an initial set X0 2 C is bounded as
1
?X0 (") ? (1
max )
1
(log ?C (X0 )
1
+ log "
1
).
¹The analysis in [24] is not correct since it relies on a wrong construction of path coupling.
Most of the effort in bounding mixing times is hence devoted to bounding this eigenvalue.
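To make Theorem 1 concrete, here is a small self-contained check (our illustration, not from the paper): for a toy 3-state reversible chain with a made-up stationary distribution, we compute λ_max and evaluate the resulting mixing-time bound.

```python
# Illustration of Theorem 1 on a toy 3-state reversible chain.
# The chain and distribution below are hypothetical examples, not from the paper.
import numpy as np

pi = np.array([0.2, 0.3, 0.5])            # stationary distribution pi_C
P = np.array([[0.6, 0.2, 0.2],            # transition matrix, reversible w.r.t. pi
              [2/15, 0.6, 4/15],
              [0.08, 0.16, 0.76]])

# check reversibility: pi_i * P_ij == pi_j * P_ji
assert np.allclose(pi[:, None] * P, (pi[:, None] * P).T)

# eigenvalues of a reversible chain are real; sort in descending order
eigs = np.sort(np.linalg.eigvals(P).real)[::-1]
lam_max = max(eigs[1], abs(eigs[-1]))     # max{lambda_2, |lambda_N|}

eps, x0 = 0.01, 0                         # accuracy and starting state
bound = (np.log(1 / pi[x0]) + np.log(1 / eps)) / (1 - lam_max)
print(f"lambda_max = {lam_max:.3f}, mixing-time bound = {bound:.1f} steps")
```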
2
Sampling from Strongly Rayleigh Distributions
In this section, we consider sampling from strongly Rayleigh (SR) distributions. Such distributions capture the strongest form of negative dependence, while enjoying a host of other remarkable properties [4]. For instance, they include the widely used DPPs as a special case. A distribution π is SR if its generating polynomial p_π : C^N → C, p_π(z) = Σ_{S⊆V} π(S) Π_{i∈S} z_i, is real stable. This means that if Im(z_i) > 0 for all arguments z_i of p_π(z), then p_π(z) ≠ 0.
We show in particular that SR distributions are amenable to efficient Markov chain sampling. Our starting point is the observation of [4] on closure properties of SR measures; of these we use symmetric homogenization. Given a distribution π on 2^[N], its symmetric homogenization π_sh on 2^[2N] is

  π_sh(S) := π(S ∩ [N]) · C(N, |S \ [N]|)⁻¹  if |S| = N, and 0 otherwise.

If π is SR, so is π_sh. We use this property below in our derivation of a fast-mixing chain.
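As a quick sanity check on this definition, the following sketch (ours; the distribution π is an arbitrary toy placeholder) constructs π_sh for N = 3 and verifies that it is a probability distribution on the N-subsets of [2N].

```python
# Symmetric homogenization pi_sh on 2^[2N] from a distribution pi on 2^[N].
# pi here is a toy placeholder; any distribution over subsets of [N] works.
from itertools import combinations
from math import comb

N = 3
ground = range(N)                          # V = [N] = {0, 1, 2}
subsets = [frozenset(c) for r in range(N + 1) for c in combinations(ground, r)]
pi = {S: 1.0 / len(subsets) for S in subsets}    # toy: uniform over 2^[N]

def pi_sh(S):
    """pi_sh(S) = pi(S ∩ [N]) / C(N, |S \\ [N]|) if |S| = N, else 0."""
    if len(S) != N:
        return 0.0
    real, dummies = S & frozenset(ground), S - frozenset(ground)
    return pi[frozenset(real)] / comb(N, len(dummies))

doubled = range(2 * N)
total = sum(pi_sh(frozenset(c)) for c in combinations(doubled, N))
print(f"sum of pi_sh over all N-subsets of [2N]: {total:.6f}")   # ~ 1.0
```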
We use here a recent result of Anari et al. [3], who show a Markov chain that mixes rapidly for homogeneous SR distributions. These distributions are over all subsets S ⊆ V of some fixed size |S| = k, and hence do not include general DPPs. Concretely, for any k-homogeneous SR distribution π : {0, 1}^N → R₊, a Gibbs-exchange sampler has mixing time

  τ_{X0}(ε) ≤ 2k(N − k)(log π(X0)⁻¹ + log ε⁻¹).
This sampler uniformly samples one item in the current set and one outside the current set, and swaps them with an appropriate probability. Using these ideas we show how to obtain fast mixing chains for any general SR distribution π on [N]. First, we construct its symmetric homogenization π_sh, and sample from π_sh using a Gibbs-exchange sampler. This chain is fast mixing, so we efficiently get a sample T ∼ π_sh. The corresponding sample for π can then be obtained by computing S = T ∩ V. Theorem 2, proved in the appendix, formally establishes the validity of this idea.
Theorem 2. If π is SR, then the mixing time of a Gibbs-exchange sampler for π_sh is bounded as

  τ_{X0}(ε) ≤ 2N² (log C(N, |X0|) + log π(X0)⁻¹ + log ε⁻¹).   (2.1)
For Theorem 2 we may choose the initial set X0 such that the first term in the sum is logarithmic in N (X0 = T0 ∩ V in Algorithm 1).
Algorithm 1 Markov Chain for Strongly Rayleigh Distributions
Require: SR distribution π
Initialize T ⊆ [2N] with |T| = N and take S = T ∩ V
while not mixed do
  Draw q ∼ Unif[0, 1]
  Draw t ∈ V \ S and s ∈ S uniformly at random
  if q ∈ [0, (N − |S|)²/(2N²)) then
    S ← S ∪ {t} with probability min{1, (π(S ∪ {t})/π(S)) · (|S| + 1)/(N − |S|)}    ▷ Add t
  else if q ∈ [(N − |S|)²/(2N²), (N − |S|)/(2N)) then
    S ← S ∪ {t} \ {s} with probability min{1, π(S ∪ {t} \ {s})/π(S)}    ▷ Exchange s with t
  else if q ∈ [(N − |S|)/(2N), (N − |S|)/(2N) + |S|²/(2N²)) then
    S ← S \ {s} with probability min{1, (π(S \ {s})/π(S)) · (N − |S| + 1)/|S|}    ▷ Delete s
  else
    Do nothing
  end if
end while
Efficient Implementation. Directly running a chain to sample N items from a (doubled) set of size 2N adds some computational overhead. Hence, we construct an equivalent, more space-efficient chain (Algorithm 1) on the initial ground set V = [N] that only maintains S ⊆ V. Interestingly, this sampler is a mixture of add-delete and Gibbs-exchange samplers. This combination makes sense intuitively, too: add-delete moves (also shown in Alg. 3) are needed since the exchange sampler cannot change the cardinality of S. But a pure add-delete chain can stall if the sets concentrate around a fixed cardinality (low probability of a larger or smaller set). Exchange moves will not suffer the same high rejection rates. The key idea underlying Algorithm 1 is that the elements in {N + 1, . . . , 2N} are indistinguishable, so it suffices to maintain merely the cardinality of the currently selected subset instead of all its indices. Appendix C contains a detailed proof.
Corollary 3. The bound (2.1) applies to the mixing time of Algorithm 1.
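For concreteness, here is a minimal Python sketch of one step of Algorithm 1 as reconstructed above. The function log_pi is a placeholder for any unnormalized log-measure (e.g., the log-determinant of a DPP kernel minor); only ratios π(S′)/π(S) are ever needed, so the normalizer never appears.

```python
# Minimal sketch of one step of Algorithm 1 (SR chain): a mixture of add,
# exchange, and delete moves on S ⊆ {0, ..., N-1}. log_pi is a placeholder
# for any unnormalized log-measure on subsets.
import math, random

def sr_chain_step(S, N, log_pi):
    q = random.random()
    out = [i for i in range(N) if i not in S]
    p_add = (N - len(S)) ** 2 / (2 * N * N)
    p_exch = (N - len(S)) / (2 * N)          # end of the exchange interval
    p_del = p_exch + len(S) ** 2 / (2 * N * N)
    if q < p_add and out:
        t = random.choice(out)
        r = math.exp(log_pi(S | {t}) - log_pi(S)) * (len(S) + 1) / (N - len(S))
        if random.random() < min(1.0, r):
            S = S | {t}                      # add t
    elif q < p_exch and out and S:
        t, s = random.choice(out), random.choice(sorted(S))
        S_new = (S | {t}) - {s}
        if random.random() < min(1.0, math.exp(log_pi(S_new) - log_pi(S))):
            S = S_new                        # exchange s with t
    elif q < p_del and S:
        s = random.choice(sorted(S))
        r = math.exp(log_pi(S - {s}) - log_pi(S)) * (N - len(S) + 1) / len(S)
        if random.random() < min(1.0, r):
            S = S - {s}                      # delete s
    return S                                 # otherwise: do nothing
```

Iterating sr_chain_step from any initial subset for the number of steps indicated by Corollary 3 yields an approximate sample from π.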
Remarks. By assuming π is SR, we obtain a clean bound for fast mixing. Compared to the bound in [19], our result avoids the somewhat opaque factor exp(βζ_F) that depends on F.
In certain cases, the above chain may mix more slowly in practice than the pure add-delete chain used in previous works [19, 24], since its probability of doing nothing is higher. In other cases, it mixes much faster than the pure add-delete chain; we observe both phenomena in our experiments in Sec. 4. Contrary to a simple add-delete chain, in all cases it is guaranteed to mix well.
3
Sampling from Matroid-Constrained Distributions
In this section we consider sampling from an explicitly-constrained distribution π_C, where C specifies certain matroid base constraints (Section 3.1) or a uniform matroid of a given rank (Section 3.2).
3.1
Matroid Base Constraints
We begin with constraints that are special cases of matroid bases²:
1. Uniform matroid: C = {S ⊆ V : |S| = k};
2. Partition matroid: given a partition V = ∪_{i=1}^{k} P_i, we allow sets that contain exactly one element from each P_i: C = {S ⊆ V : |S ∩ P_i| = 1 for all 1 ≤ i ≤ k}.
An important special case of a distribution with a uniform matroid constraint is the k-DPP [27].
Partition matroids are used in multilabel problems [38], and also in probabilistic diversity models [21].
Algorithm 2 Gibbs Exchange Sampler for Matroid Bases
Require: set function F, inverse temperature β, matroid C ⊆ 2^V
Initialize S ∈ C
while not mixed do
  Let b = 1 with probability 0.5
  if b = 1 then
    Draw s ∈ S and t ∈ V \ S (t ∈ P(s) \ {s} for the partition matroid) uniformly at random
    if S ∪ {t} \ {s} ∈ C then
      S ← S ∪ {t} \ {s} with probability π_C(S ∪ {t} \ {s}) / (π_C(S) + π_C(S ∪ {t} \ {s}))
    end if
  end if
end while
The sampler is shown in Algorithm 2. At each iteration, we randomly select an item s ∈ S and t ∈ V \ S such that the new set S ∪ {t} \ {s} satisfies C, and swap them with a certain probability. For uniform matroids, this means t ∈ V \ S; for partition matroids, t ∈ P(s) \ {s}, where P(s) is the part that s resides in. The fact that the chain has stationary distribution π_C can be inferred via detailed balance. Similar to the analysis in [19] for unconstrained sampling, the mixing time depends on a quantity that measures how much F deviates from linearity: ζ_F = max_{S,T∈C} |F(S) + F(T) − F(S ∩ T) − F(S ∪ T)|. Our proof, however, differs from that of [19]. While they use canonical paths [10], we use multicommodity flows, which are more effective in our constrained setting.
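A minimal sketch of Algorithm 2 for the partition-matroid case follows (ours; log_pi is again a placeholder unnormalized log-measure, and parts is a list of disjoint lists covering V). The heat-bath acceptance π_C(S′)/(π_C(S) + π_C(S′)) is computed from the ratio alone.

```python
# Sketch of one step of Algorithm 2 (Gibbs exchange) for a partition matroid:
# S holds exactly one element per part; log_pi is an unnormalized log-measure.
import math, random

def exchange_step(S, parts, log_pi):
    if random.random() < 0.5:                  # lazy chain: b = 1 w.p. 0.5
        s = random.choice(sorted(S))
        part = next(p for p in parts if s in p)
        candidates = [t for t in part if t != s]
        if not candidates:
            return S
        t = random.choice(candidates)
        S_new = (S - {s}) | {t}                # stays a base of the matroid
        ratio = math.exp(log_pi(S_new) - log_pi(S))
        if random.random() < ratio / (1.0 + ratio):   # pi(S')/(pi(S)+pi(S'))
            return S_new
    return S
```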
Theorem 4. Consider the chain in Algorithm 2. For the uniform matroid, τ_{X0}(ε) is bounded as

  τ_{X0}(ε) ≤ 4k(N − k) exp(2βζ_F)(log π_C(X0)⁻¹ + log ε⁻¹);   (3.1)

for the partition matroid, the mixing time is bounded as

  τ_{X0}(ε) ≤ 4k² max_i |P_i| exp(2βζ_F)(log π_C(X0)⁻¹ + log ε⁻¹).   (3.2)

²Drawing even a uniform sample from the bases of an arbitrary matroid can be hard.
Observe that if the P_i's form an equipartition, i.e., |P_i| = N/k for all i, then the second bound becomes Õ(kN). For k = O(log N), the mixing times depend on N as O(N polylog(N)) = Õ(N). For uniform matroids, the time is equally small if k is close to N. Finally, the time depends on the initialization, π_C(X0). If F is monotone increasing, one may run a simple greedy algorithm to ensure that π_C(X0) is large. If F is monotone submodular, this ensures that log π_C(X0)⁻¹ = O(log N).
Our proof uses a multicommodity flow to upper bound the largest eigenvalue of the transition matrix. Concretely, let H be the set of all simple paths between states in the state graph of the Markov chain; we construct a flow f : H → R₊ that assigns a nonnegative flow value to any simple path between any two states (sets) X, Y ∈ C. Each edge e = (S, T) in the graph has a capacity Q(e) = π_C(S)P(S, T), where P(S, T) is the transition probability from S to T. The total flow sent from X to Y must be π_C(X)π_C(Y): if H_XY is the set of all simple paths from X to Y, then we need Σ_{p∈H_XY} f(p) = π_C(X)π_C(Y). Intuitively, the mixing time relates to the congestion in any edge, and to the length of the paths. If there are many short paths X ⇝ Y across which flow can be distributed, then mixing is fast. This intuition is captured in a fundamental theorem:
Theorem 5 (Multicommodity Flow [35]). Let E be the set of edges in the transition graph, and P(X, Y) the transition probability. Define

  ρ(f) = max_{e∈E} Q(e)⁻¹ Σ_{p∋e} f(p) len(p),

where len(p) is the length of the path p. Then λ_max ≤ 1 − 1/ρ(f).
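The next snippet (our toy illustration, not from the paper) makes Theorem 5 tangible: for a lazy random walk on a 4-cycle it routes each pair's flow along one shortest path and evaluates the congestion ρ(f) by brute force.

```python
# Toy illustration of Theorem 5: compute the congestion rho(f) of a flow
# that routes pi(X)pi(Y) along one shortest path for each ordered pair.
# The 4-state lazy random walk on a cycle is a made-up example.
import numpy as np
from itertools import permutations

n = 4
pi = np.full(n, 1 / n)                     # uniform stationary distribution
P = np.zeros((n, n))
for i in range(n):                         # lazy walk on a 4-cycle
    P[i, i] = 0.5
    P[i, (i + 1) % n] += 0.25
    P[i, (i - 1) % n] += 0.25

def shortest_path(x, y):                   # walk around the cycle toward y
    d = (y - x) % n
    step = 1 if d <= n - d else -1
    path, cur = [x], x
    while cur != y:
        cur = (cur + step) % n
        path.append(cur)
    return path

load = {}                                  # edge -> sum of f(p) * len(p)
for x, y in permutations(range(n), 2):
    p = shortest_path(x, y)
    for e in zip(p, p[1:]):
        load[e] = load.get(e, 0.0) + pi[x] * pi[y] * (len(p) - 1)

rho = max(load[e] / (pi[e[0]] * P[e]) for e in load)
print(f"rho(f) = {rho:.3f}  =>  lambda_max <= 1 - 1/rho = {1 - 1/rho:.3f}")
```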
With this property of multicommodity flow, we are ready to prove Thm. 4.
Proof. (Theorem 4) We sketch the proof for partition matroids; the full proof is in Appendix A. For any two sets X, Y ∈ C, we distribute the flow equally across all shortest paths X ⇝ Y in the transition graph and bound the amount of flow through any edge e ∈ E.
Consider two arbitrary sets X, Y ∈ C with symmetric difference |X △ Y| = 2m ≤ 2k, i.e., m elements need to be exchanged to reach Y from X. However, these m steps are a valid path in the transition graph only if every set S along the way is in C. The exchange property of matroids implies that this requirement is indeed true, so any shortest path X ⇝ Y has length m. Moreover, there are exactly m! such paths, since we can exchange the elements in X \ Y in any order to reach Y. Note that once we choose s ∈ X \ Y to swap out, there is only one choice t ∈ Y \ X to swap in, where t lies in the same part as s in the partition matroid; otherwise the constraint would be violated. Since the total flow is π_C(X)π_C(Y), each path receives π_C(X)π_C(Y)/m! flow.
Next, let e = (S, T ) be any edge on some shortest path X
Y ; so S, T 2 C and T = S [ {j}\{i}
for some i, j 2 V . Let 2r = |X S| < 2m be the length of the shortest path X
S, i.e., r elements
need to be exchanged to reach from X to S. Similarly, m r 1 elements are exchanged to reach
from T to Y . Since there is a path for every permutation of those elements, the ratio of the total flow
we (X, Y ) that edge e receives from pair X, Y , and Q(e), becomes
we (X, Y )
2r!(m 1 r)!kL
?
exp(2 ?F )(exp( F (
Q(e)
m!ZC
S (X, Y
))) + exp( F (
T (X, Y
)))),
(3.3)
where we define S (X, Y ) = X Y S = (X \ Y \ S) [ (X \ (Y [ S)) [ (Y \ (X [ S)). To bound
To bound the total flow, we must count the pairs X, Y such that e is on their shortest path(s), and bound the flow they send. We do this in two steps: first we sum over all (X, Y)'s that share the upper bound (3.3) because they have the same difference sets U_S = Δ_S(X, Y) and U_T = Δ_T(X, Y), and then we sum over all possible U_S and U_T. For fixed U_S, U_T, there are C(m − 1, r) pairs that share those difference sets, since the only freedom we have is to assign r of the m − 1 elements in S \ (X ∩ Y ∩ S) to Y, and the rest to X. Appropriate summing and canceling then yields

  Σ_{(X,Y): Δ_S(X,Y)=U_S, Δ_T(X,Y)=U_T} w_e(X, Y)/Q(e) ≤ (2kL/Z_C) exp(2βζ_F)(exp(βF(U_S)) + exp(βF(U_T))).   (3.4)
Finally, we sum over all valid U_S (U_T is determined by U_S). One can show that any valid U_S ∈ C, and hence Σ_{U_S} exp(βF(U_S)) ≤ Z_C, and likewise for U_T. Hence, summing the bound (3.4) over all possible choices of U_S yields

  ρ(f) ≤ 4kL exp(2βζ_F) max_p len(p) ≤ 4k²L exp(2βζ_F),

where we upper bound the length of any shortest path by k, since m ≤ k. Hence

  τ_{X0}(ε) ≤ 4k²L exp(2βζ_F)(log π_C(X0)⁻¹ + log ε⁻¹).
For more restrictive constraints, there are fewer paths, and the bounds can become larger. Appendix A shows the general dependence on k (as k!). It is also interesting to compare the bound on the uniform matroid in Eq. (3.1) to that shown in [3] for a sub-class of distributions that satisfy the property of being homogeneous strongly Rayleigh³. If π_C is homogeneous strongly Rayleigh, we have τ_{X0}(ε) ≤ 2k(N − k)(log π_C(X0)⁻¹ + log ε⁻¹). In our analysis, without additional assumptions on π_C, we pay a factor of 2 exp(2βζ_F) for generality. This factor is one for some strongly Rayleigh distributions (e.g., if F is modular), but not for all.
3.2
Uniform Matroid Constraint
We consider constraints forming a uniform matroid of a certain rank: C = {S : |S| ≤ k}. We employ the lazy add-delete Markov chain in Algorithm 3, where in each iteration, with probability 0.5, we uniformly at random sample one element from V and either add it to or delete it from the current set, while respecting the constraints. To show fast mixing, we use path coupling, which essentially says that if we have a contraction of two coupled chains, then we have fast mixing. We construct a path coupling (S, T) → (S′, T′) on a carefully generated graph with edges E (from a proper metric). With all details in Appendix B, we end up with the following theorem:
Theorem 6. Consider the chain shown in Algorithm 3. Let α = max_{(S,T)∈E} {α₁, α₂}, where α₁ and α₂ are functions of edges (S, T) ∈ E and are defined as

  α₁ = 1 − Σ_{i∈[N]\S} (p₊(S, i) − p₊(T, i))₊ − Σ_{i∈T} (p₋(T, i) − p₋(S, i))₊;
  α₂ = 1 − min{p₋(S, s), p₋(T, t)} − [[|S| < k]](min{p₊(S, t), p₊(T, s)} − Σ_{i∈[N]\(S∪T)} |p₊(S, i) − p₊(T, i)|) + Σ_{i∈R} |p₋(T, i) − p₋(S, i)|,

where (x)₊ = max(0, x). The summations over absolute differences quantify the sensitivity of transition probabilities to adding/deleting elements in neighboring (S, T). Assuming α < 1, we get

  τ(ε) ≤ 2N log(N ε⁻¹) / (1 − α).
Algorithm 3 Gibbs Add-Delete Markov Chain for Uniform Matroid
Require: set function F, inverse temperature β, ground set V, the rank k of C
Ensure: S sampled from π_C
Initialize S ∈ C
while not mixed do
  Let b = 1 with probability 0.5
  if b = 1 then
    Draw s ∈ V uniformly at random
    if s ∉ S and |S ∪ {s}| ≤ k then
      S ← S ∪ {s} with probability p₊(S, s) = π_C(S ∪ {s}) / (π_C(S) + π_C(S ∪ {s}))
    else
      S ← S \ {s} with probability p₋(S, s) = π_C(S \ {s}) / (π_C(S) + π_C(S \ {s}))
    end if
  end if
end while
Remarks. If α is less than 1 and independent of N, then the mixing time is nearly linear in N. The condition is conceptually similar to those in [29, 34]. Fast mixing requires both α₁ and α₂, and hence the change in probability when adding or deleting a single element in neighboring subsets, to be small. Such a notion is closely related to the curvature of discrete set functions.
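A compact sketch of one step of Algorithm 3 (ours; log_pi is a placeholder unnormalized log-measure, and V is a list of ground-set elements):

```python
# Sketch of Algorithm 3: lazy Gibbs add-delete chain on C = {S : |S| <= k}.
# log_pi is a placeholder for any unnormalized log-measure on subsets of V.
import math, random

def add_delete_step(S, V, k, log_pi):
    if random.random() < 0.5:                        # lazy: b = 1 w.p. 0.5
        s = random.choice(V)
        if s not in S and len(S) < k:
            S_new = S | {s}                          # propose adding s
        else:
            S_new = S - {s}                          # propose deleting s (no-op if s not in S)
        ratio = math.exp(log_pi(S_new) - log_pi(S))
        if random.random() < ratio / (1.0 + ratio):  # heat-bath p_+/p_- of Algorithm 3
            return S_new
    return S
```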
³Appendix C contains details about strongly Rayleigh distributions.
4
Experiments
We next empirically study the dependence of sampling times on key factors that govern our theoretical bounds. In particular, we run Markov chains on chain-structured Ising models on a partition matroid base and DPPs on a uniform matroid, and consider estimating marginal and conditional probabilities of a single variable. To monitor the convergence of the Markov chains, we use the potential scale reduction factor (PSRF) [7, 18], which runs several chains in parallel and compares within-chain variances to between-chain variances. Typically, PSRF is greater than 1 and converges to 1 in the limit; if it is close to 1, we empirically conclude that the chains have mixed well. Throughout the experiments we run 10 chains in parallel for estimations, and declare "convergence" at a PSRF of 1.05.
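For reference, a minimal PSRF computation in the spirit of Gelman and Rubin [18] (our sketch; published variants differ in small-sample corrections):

```python
# Minimal PSRF (Gelman-Rubin) sketch: chains is an (m, t) array of m parallel
# chains, each holding t scalar estimates; values near 1 indicate good mixing.
import numpy as np

def psrf(chains):
    m, t = chains.shape
    means = chains.mean(axis=1)
    B = t * means.var(ddof=1)                    # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()        # within-chain variance
    var_hat = (t - 1) / t * W + B / t
    return np.sqrt(var_hat / W)

rng = np.random.default_rng(0)
print(psrf(rng.normal(size=(10, 5000))))         # iid chains -> close to 1
```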
We first focus on small synthetic examples where we can compute exact marginal and conditional probabilities. We construct a 20-variable chain-structured Ising model as

  π_C(S) ∝ exp(β(λ Σ_{i=1}^{19} w_i (s_i ⊕ s_{i+1}) + (1 − λ)|S|)) · [[S ∈ C]],

where the s_i are 0-1 encodings of S, and the w_i are drawn uniformly at random from [0, 1]. The parameters (β, λ) govern bounds on the mixing time via exp(2βζ_F); the smaller λ, the smaller ζ_F. C is a partition matroid of rank 5. We estimate conditional probabilities of one random variable conditioned on 0, 1 and 2 other variables, and compare against the ground truth. We set (β, λ) to (1, 1), (3, 1) and (3, 0.5); results are shown in Fig. 1. All marginals and conditionals converge to their true values, but at different speeds. Comparing Fig. 1a against 1b, we observe that with fixed λ, an increase in β slows down convergence, as expected. Comparing Fig. 1b against 1c, we observe that with fixed β, a decrease in λ speeds up convergence, also as expected given our theoretical results. Appendices D.1 and D.2 illustrate the convergence of estimations under other (β, λ) settings.
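Concretely, the unnormalized log-measure used in these experiments can be coded as below (our transcription of the display above; the weights and parameter names follow the text, and the caller is assumed to enforce S ∈ C):

```python
# Chain-structured Ising potential from the experiments: s is the length-20
# NumPy 0-1 indicator vector of S, and w holds the 19 uniform edge weights.
import numpy as np

rng = np.random.default_rng(0)
w = rng.uniform(size=19)                 # w_i ~ Unif[0, 1]

def log_pi(s, beta, lam):
    """log pi_C(S) up to normalization, for s in {0,1}^20 with S in C."""
    xor = np.abs(np.diff(s))             # s_i XOR s_{i+1} for 0-1 entries
    return beta * (lam * (w @ xor) + (1 - lam) * s.sum())
```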
Figure 1: Convergence of marginal (Marg) and conditional (Cond-1 and Cond-2, conditioned on 1 and 2 other variables) probabilities of a single variable in a 20-variable Ising model with different (β, λ): (a) (β, λ) = (1, 1), (b) (β, λ) = (3, 1), (c) (β, λ) = (3, 0.5). Full lines show the means and dotted lines the standard deviations of estimations.
We also check convergence on larger models. We use a DPP on a uniform matroid of rank 30 on the Ailerons data (http://www.dcc.fc.up.pt/~ltorgo/Regression/DataSets.html) of size 200. Here, we do not have access to the ground truth, and hence plot the estimation mean with standard deviations among 10 chains in Fig. 3a. We observe that the chains eventually converge, i.e., the mean becomes stable and the variance small. We also use PSRF to approximately judge convergence. More results can be found in Appendix D.3.
Furthermore, the mixing time depends on the size N of the ground set. We use a DPP on Ailerons and
vary N from 50 to 1000. Fig. 2a shows the PSRF from 10 chains for each setting. By thresholding
PSRF at 1.05 in Fig. 2b we see a clearer dependence on N . At this scale, the mixing time grows
almost linearly with N , indicating that this chain is efficient at least at small to medium scale.
Finally, we empirically study how fast our sampler for strongly Rayleigh distributions converges. We compare the chain in Algorithm 1 (Mix) against a simple add-delete chain (Add-Delete). We use a DPP on the Ailerons data⁴ of size 200; the corresponding PSRF is shown in Fig. 3b. We observe that Mix converges slightly more slowly than Add-Delete since it is lazier. However, the Add-Delete chain does not always mix fast. Fig. 3c illustrates a different setting, where we modify the eigenspectrum of the kernel matrix: the first 100 eigenvalues are 500 and the others 1/500.
⁴http://www.dcc.fc.up.pt/~ltorgo/Regression/DataSets.html
Figure 2: Empirical mixing time analysis when varying dataset sizes (N = 50 to 1000): (a) PSRFs for each set of chains, (b) approximate mixing time obtained by thresholding PSRF at 1.05.
Such a kernel corresponds to almost an elementary DPP, where the size of the observed subsets sharply concentrates around 100. Here, Add-Delete moves very slowly. Mix, in contrast, has the ability to exchange elements and thus converges far faster than Add-Delete.
Figure 3: (a) Convergence of marginal and conditional probabilities (Marg, Cond-5, Cond-10) for a DPP on a uniform matroid, (b, c) comparison between the add-delete chain (Algorithm 3) and the projection chain (Algorithm 1) for two instances: a slowly decaying spectrum and a sharp step in the spectrum.
5
Discussion and Open Problems
We presented theoretical results on Markov chain sampling for discrete probabilistic models subject to implicit and explicit constraints. In particular, under the implicit constraint that the probability measure is strongly Rayleigh, we obtain an unconditional fast mixing guarantee. For distributions with various explicit constraints, we showed sufficient conditions for fast mixing. We showed empirically that the dependencies of mixing times on various factors are consistent with our theoretical analysis.
There still exist many open problems in both implicitly- and explicitly-constrained settings. Many bounds that we show depend on structural quantities (ζ_F or α) that may not always be easy to quantify in practice. It would be valuable to develop chains on special classes of distributions (as we did for strongly Rayleigh) whose mixing time is independent of these factors. Moreover, we only considered matroid bases or uniform matroids, while several important settings such as knapsack constraints remain open. In fact, even uniform sampling with a knapsack constraint is not easy; a mixing time of O(N^4.5) is known [33]. We defer the development of similar or better bounds, potentially with structural factors like exp(βζ_F), on specialized discrete probabilistic models to future work.
Acknowledgements. This research was partially supported by NSF CAREER 1553284 and a Google
Research Award.
References
[1] D. J. Aldous. Some inequalities for reversible Markov chains. Journal of the London Mathematical Society, pages 564–576, 1982.
[2] N. Anari and S. O. Gharan. Effective-resistance-reducing flows and asymmetric TSP. In FOCS, 2015.
[3] N. Anari, S. O. Gharan, and A. Rezaei. Monte Carlo Markov chain algorithms for sampling strongly Rayleigh distributions and determinantal point processes. In COLT, 2016.
[4] J. Borcea, P. Brändén, and T. Liggett. Negative dependence and the geometry of polynomials. Journal of the American Mathematical Society, pages 521–567, 2009.
[5] A. Bouchard-Côté and M. I. Jordan. Variational inference over combinatorial spaces. In NIPS, 2010.
[6] A. Broder. Generating random spanning trees. In FOCS, pages 442–447, 1989.
[7] S. P. Brooks and A. Gelman. General methods for monitoring convergence of iterative simulations. Journal of Computational and Graphical Statistics, pages 434–455, 1998.
[8] R. Bubley and M. Dyer. Path coupling: A technique for proving rapid mixing in Markov chains. In FOCS, pages 223–231, 1997.
[9] N. Cesa-Bianchi and G. Lugosi. Combinatorial bandits. In COLT, 2009.
[10] P. Diaconis and D. Stroock. Geometric bounds for eigenvalues of Markov chains. The Annals of Applied Probability, pages 36–61, 1991.
[11] J. Djolonga and A. Krause. From MAP to marginals: Variational inference in Bayesian submodular models. In NIPS, pages 244–252, 2014.
[12] M. Dyer and C. Greenhill. A more rapidly mixing Markov chain for graph colorings. Random Structures and Algorithms, pages 285–317, 1998.
[13] M. Dyer, A. Frieze, and M. Jerrum. On counting independent sets in sparse graphs. In FOCS, 1999.
[14] S. Ermon, C. P. Gomes, A. Sabharwal, and B. Selman. Embed and project: Discrete sampling with universal hashing. In NIPS, pages 2085–2093, 2013.
[15] T. Feder and M. Mihail. Balanced matroids. In STOC, pages 26–38, 1992.
[16] A. Frieze, N. Goyal, L. Rademacher, and S. Vempala. Expanders via random spanning trees. SIAM Journal on Computing, 43(2):497–513, 2014.
[17] M. Gartrell, U. Paquet, and N. Koenigstein. Low-rank factorization of determinantal point processes for recommendation. arXiv:1602.05436, 2016.
[18] A. Gelman and D. B. Rubin. Inference from iterative simulation using multiple sequences. Statistical Science, pages 457–472, 1992.
[19] A. Gotovos, H. Hassani, and A. Krause. Sampling from probabilistic submodular models. In NIPS, 2015.
[20] D. M. Greig, B. T. Porteous, and A. H. Seheult. Exact maximum a posteriori estimation for binary images. Journal of the Royal Statistical Society, 1989.
[21] R. Iyer and J. Bilmes. Submodular point processes. In AISTATS, 2015.
[22] M. Jerrum and A. Sinclair. Polynomial-time approximation algorithms for the Ising model. SIAM J. Computing, 1993.
[23] M. Jerrum, A. Sinclair, and E. Vigoda. A polynomial-time approximation algorithm for the permanent of a matrix with nonnegative entries. JACM, 2004.
[24] B. Kang. Fast determinantal point process sampling with application to clustering. In NIPS, pages 2319–2327, 2013.
[25] T. Kathuria and A. Deshpande. On sampling from constrained diversity promoting point processes. 2016.
[26] M. Kojima and F. Komaki. Determinantal point process priors for Bayesian variable selection in linear regression. arXiv:1406.2100, 2014.
[27] A. Kulesza and B. Taskar. k-DPPs: Fixed-size determinantal point processes. In ICML, pages 1193–1200, 2011.
[28] A. Kulesza and B. Taskar. Determinantal point processes for machine learning. arXiv:1207.6083, 2012.
[29] C. Li, S. Jegelka, and S. Sra. Fast DPP sampling for Nyström with application to kernel methods. In ICML, 2016.
[30] C. Li, S. Sra, and S. Jegelka. Gaussian quadrature for matrix inverse forms with applications. In ICML, 2016.
[31] C. J. Maddison, D. Tarlow, and T. Minka. A* sampling. In NIPS, 2014.
[32] Z. Mariet and S. Sra. Diversity networks. In ICLR, 2016.
[33] B. Morris and A. Sinclair. Random walks on truncated cubes and sampling 0-1 knapsack solutions. SIAM Journal on Computing, pages 195–226, 2004.
[34] P. Rebeschini and A. Karbasi. Fast mixing for discrete point processes. In COLT, 2015.
[35] A. Sinclair. Improved bounds for mixing rates of Markov chains and multicommodity flow. Combinatorics, Probability and Computing, pages 351–370, 1992.
[36] D. Smith and J. Eisner. Dependency parsing by belief propagation. In EMNLP, 2008.
[37] D. Spielman and N. Srivastava. Graph sparsification by effective resistances. In STOC, 2008.
[38] J. Zhang, J. Djolonga, and A. Krause. Higher-order inference for multi-class log-supermodular models. In ICCV, pages 1859–1867, 2015.
Alina Beygelzimer
Yahoo Research
New York, NY
[email protected]
Daniel Hsu
Columbia University
New York, NY
[email protected]
John Langford
Microsoft Research
New York, NY
[email protected]
Chicheng Zhang
UC San Diego
La Jolla, CA
[email protected]
Abstract
We investigate active learning with access to two distinct oracles: L ABEL (which
is standard) and S EARCH (which is not). The S EARCH oracle models the situation
where a human searches a database to seed or counterexample an existing solution.
S EARCH is stronger than L ABEL while being natural to implement in many situations. We show that an algorithm using both oracles can provide exponentially
large problem-dependent improvements over L ABEL alone.
1
Introduction
Most active learning theory is based on interacting with a LABEL oracle: An active learner observes unlabeled examples, each with a label that is initially hidden. The learner provides an unlabeled example to the oracle, and the oracle responds with the label. Using LABEL in an active learning algorithm is known to give (sometimes exponentially large) problem-dependent improvements in label complexity, even in the agnostic setting where no assumption is made about the underlying distribution [e.g., Balcan et al., 2006, Hanneke, 2007, Dasgupta et al., 2007, Hanneke, 2014].
A well-known deficiency of LABEL arises in the presence of rare classes in classification problems, frequently the case in practice [Attenberg and Provost, 2010, Simard et al., 2014]. Class imbalance may be so extreme that simply finding an example from the rare class can exhaust the labeling budget. Consider the problem of learning interval functions in [0, 1]. Any LABEL-only active learner needs at least Ω(1/ε) LABEL queries to learn an arbitrary target interval with error at most ε [Dasgupta, 2005]. Given any positive example from the interval, however, the query complexity of learning intervals collapses to O(log(1/ε)), as we can just do a binary search for each of the end points.
A natural approach used to overcome this hurdle in practice is to search for known examples of the rare class [Attenberg and Provost, 2010, Simard et al., 2014]. Domain experts are often adept at finding examples of a class by various, often clever means. For instance, when building a hate speech filter, a simple web search can readily produce a set of positive examples. Sending a random batch of unlabeled text to LABEL is unlikely to produce any positive examples at all.
Another form of interaction common in practice is providing counterexamples to a learned predictor. When monitoring the stream filtered by the current hate speech filter, a human editor may spot a clear-cut example of hate speech that seeped through the filter. The editor, using all the search tools available to her, may even be tasked with searching for such counterexamples. The goal of the learning system is then to interactively restrict the searchable space, guiding the search process to where it is most effective.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Counterexamples can be ineffective or misleading in practice as well. Reconsidering the intervals example above, a counterexample on the boundary of an incorrect interval provides no useful information about any other examples. What is a good counterexample? What is a natural way to restrict the searchable space? How can the intervals problem be generalized?
We define a new oracle, SEARCH, that provides counterexamples to version spaces. Given a set of possible classifiers H mapping unlabeled examples to labels, a version space V ⊆ H is the subset of classifiers still under consideration by the algorithm. A counterexample to a version space is a labeled example which every classifier in the version space classifies incorrectly. When there is no counterexample to the version space, SEARCH returns nothing.
How can a counterexample to the version space be used? We consider a nested sequence of hypothesis classes of increasing complexity, akin to Structural Risk Minimization (SRM) in passive learning [see, e.g., Vapnik, 1982, Devroye et al., 1996]. When SEARCH produces a counterexample to the version space, it gives a proof that the current hypothesis class is too simplistic to solve the problem effectively. We show that this guided increase in hypothesis complexity results in a radically lower LABEL complexity than directly learning on the complex space. Sample complexity bounds for model selection in LABEL-only active learning were studied by Balcan et al. [2010], Hanneke [2011].
SEARCH can easily model the practice of seeding discussed earlier. If the first hypothesis class has just the constant always-negative classifier h(x) = −1, a seed example with label +1 is a counterexample to the version space. Our most basic algorithm uses SEARCH just once before using LABEL, but it is clear from inspection that multiple seeds are not harmful, and they may be helpful if they provide the proof required to operate with an appropriately complex hypothesis class.
Defining SEARCH with respect to a version space rather than a single classifier allows us to formalize "counterexample far from the boundary" in a general fashion which is compatible with the way LABEL-based active learning algorithms work.
Related work. The closest oracle considered in the literature is the Class Conditional Query (CCQ) [Balcan and Hanneke, 2012] oracle. A query to CCQ specifies a finite set of unlabeled examples and a label, while returning an example in the subset with the specified label, if one exists.
In contrast, SEARCH has an implicit query set that is an entire region of the input space rather than a finite set. Simple searches over this large implicit domain can more plausibly discover relevant counterexamples: When building a detector for penguins in images, the input to CCQ might be a set of images and the label "penguin". Even if we are very lucky and the set happens to contain a penguin image, a search amongst image tags may fail to find it in the subset because it is not tagged appropriately. SEARCH is more likely to discover counterexamples: surely there are many images correctly tagged as having penguins.
Why is it natural to define a query region implicitly via a version space? There is a practical reason: it is a concise description of a natural region with an efficiently implementable membership filter [Beygelzimer et al., 2010, 2011, Huang et al., 2015]. (Compare this to an oracle call that has to explicitly enumerate a large set of examples. The algorithm of Balcan and Hanneke [2012] uses samples of size roughly dθ/ε².)
The use of SEARCH in this paper is also substantially different from the use of CCQ by Balcan and Hanneke [2012]. Our motivation is to use SEARCH to assist LABEL, as opposed to using SEARCH alone. This is especially useful in any setting where the cost of SEARCH is significantly higher than the cost of LABEL: we hope to avoid using SEARCH queries whenever it is possible to make progress using LABEL queries. This is consistent with how interactive learning systems are used in practice. For example, the Interactive Classification and Extraction system of Simard et al. [2014] combines LABEL with search in a production environment.
The final important distinction is that we require SEARCH to return the label of the optimal predictor in the nested sequence. For many natural sequences of hypothesis classes, the Bayes optimal classifier is eventually in the sequence, in which case it is equivalent to assuming that the label in a counterexample is the most probable one, as opposed to a randomly-drawn label from the conditional distribution (as in CCQ and LABEL).
Is this a reasonable assumption? Unlike with LABEL queries, where the labeler has no choice of what to label, here the labeler chooses a counterexample. If a human editor finds an unquestionable
example of hate speech that seeped through the filter, it is quite reasonable to assume that this counterexample is consistent with the Bayes optimal predictor for any sensible feature representation.
Organization. Section 2 formally introduces the setting. Section 3 shows that SEARCH is at least as powerful as LABEL. Section 4 shows how to use SEARCH and LABEL jointly in the realizable setting where a zero-error classifier exists in the nested sequence of hypothesis classes. Section 5 handles the agnostic setting where LABEL is subject to label noise, and shows an amortized approach to combining the two oracles with a good guarantee on the total cost.
2
Definitions and Setting
In active learning, there is an underlying distribution D over X × Y, where X is the instance space and Y := {−1, +1} is the label space. The learner can obtain independent draws from D, but the label is hidden unless explicitly requested through a query to the LABEL oracle. Let DX denote the marginal of D over X.
We consider learning with a nested sequence of hypothesis classes H0 ⊆ H1 ⊆ · · · ⊆ Hk ⊆ · · · , where Hk ⊆ Y^X has VC dimension dk. For a set of labeled examples S ⊆ X × Y, let Hk(S) := {h ∈ Hk : ∀(x, y) ∈ S, h(x) = y} be the set of hypotheses in Hk consistent with S. Let err(h) := Pr_{(x,y)∼D}[h(x) ≠ y] denote the error rate of a hypothesis h with respect to distribution D, and err(h, S) be the error rate of h on the labeled examples in S. Let h*_k := argmin_{h∈Hk} err(h), breaking ties arbitrarily, and let k* := argmin_{k≥0} err(h*_k), breaking ties in favor of the smallest such k. For simplicity, we assume the minimum is attained at some finite k*. Finally, define h* := h*_{k*}, the optimal hypothesis in the sequence of classes. The goal of the learner is to learn a hypothesis with error rate not much more than that of h*.
In addition to LABEL, the learner can also query SEARCH with a version space.

Oracle SEARCH_H(V) (where H ∈ {Hk}_{k=0}^∞)
input: Set of hypotheses V ⊆ H
output: Labeled example (x, h*(x)) s.t. h(x) ≠ h*(x) for all h ∈ V, or ⊥ if there is no such example.

Thus if SEARCH_H(V) returns an example, this example is a systematic mistake made by all hypotheses in V. (If V = ∅, we expect SEARCH to return some example, i.e., not ⊥.)
Our analysis is given in terms of the disagreement coefficient of Hanneke [2007], which has been a central parameter for analyzing active learning algorithms. Define the region of disagreement of a set of hypotheses V as Dis(V) := {x ∈ X : ∃h, h′ ∈ V s.t. h(x) ≠ h′(x)}. The disagreement coefficient of V at scale r is θ_V(r) := sup_{h∈V, r′≥r} Pr_{DX}[Dis(B_V(h, r′))]/r′, where B_V(h, r′) = {h′ ∈ V : Pr_{x∼DX}[h′(x) ≠ h(x)] ≤ r′} is the ball of radius r′ around h.
The Õ(·) notation hides factors that are polylogarithmic in 1/δ and quantities that do appear, where δ is the usual confidence parameter.
3
The Relative Power of the Two Oracles
Although SEARCH cannot always implement LABEL efficiently, it is as effective at reducing the region of disagreement. The clearest example is learning threshold classifiers H := {h_w : w ∈ [0, 1]} in the realizable case, where h_w(x) = +1 if w ≤ x ≤ 1, and −1 if 0 ≤ x < w. A simple binary search with LABEL achieves an exponential improvement in query complexity over passive learning. The agreement region of any set of threshold classifiers with thresholds in [w_min, w_max] is [0, w_min) ∪ [w_max, 1]. Since SEARCH is allowed to return any counterexample in the agreement region, there is no mechanism for forcing SEARCH to return the label of a particular point we want. However, this is not needed to achieve logarithmic query complexity with SEARCH: If binary search starts with querying the label of x ∈ [0, 1], we can query SEARCH_H(V_x), where V_x := {h_w ∈ H : w < x}, instead. If SEARCH returns ⊥, we know that the target w* ≤ x and can safely reduce the region of disagreement to [0, x). If SEARCH returns a counterexample (x0, −1) with x0 ≥ x, we know that w* > x0 and can reduce the region of disagreement to (x0, 1].
This observation holds more generally. In the proposition below, we assume that LABEL(x) = h*(x) for simplicity. If LABEL(x) is noisy, the proposition holds for any active learning algorithm that doesn't eliminate any h ∈ H with h(x) = LABEL(x) from the version space.
Proposition 1. For any call x ∈ X to LABEL such that LABEL(x) = h*(x), we can construct a call to SEARCH that achieves a no lesser reduction in the region of disagreement.
Proof. For any V ⊆ H, let H_SEARCH(V) be the hypotheses in H consistent with the output of SEARCH_H(V): if SEARCH_H(V) returns a counterexample (x, y) to V, then H_SEARCH(V) := {h ∈ H : h(x) = y}; otherwise, H_SEARCH(V) := V. Let H_LABEL(x) := {h ∈ H : h(x) = LABEL(x)}. Also, let V_x := H_{+1}(x) := {h ∈ H : h(x) = +1}. We will show that V_x is such that H_SEARCH(V_x) ⊆ H_LABEL(x), and hence Dis(H_SEARCH(V_x)) ⊆ Dis(H_LABEL(x)).
There are two cases to consider: If h*(x) = +1, then SEARCH_H(V_x) returns ⊥. In this case, H_LABEL(x) = H_SEARCH(V_x) = H_{+1}(x), and we are done. If h*(x) = −1, SEARCH(V_x) returns a valid counterexample (possibly (x, −1)) in the region of agreement of H_{+1}(x), eliminating all of H_{+1}(x). Thus H_SEARCH(V_x) ⊆ H \ H_{+1}(x) = H_LABEL(x), and the claim holds also.
As shown by the problem of learning intervals on the line, SEARCH can be exponentially more
powerful than LABEL.
4
Realizable Case
We now turn to general active learning algorithms that combine SEARCH and LABEL. We focus on algorithms using both SEARCH and LABEL since LABEL is typically easier to implement than SEARCH and hence should be used where SEARCH has no significant advantage. (Whenever SEARCH is less expensive than LABEL, Section 3 suggests a transformation to a SEARCH-only algorithm.)
This section considers the realizable case, in which we assume that the hypothesis h* = h*_{k*} ∈ H_{k*} has err(h*) = 0. This means that LABEL(x) returns h*(x) for any x in the support of DX.
4.1
Combining LABEL and SEARCH
Our algorithm (shown as Algorithm 1) is called LARCH, because it combines LABEL and SEARCH. Like many selective sampling methods, LARCH uses a version space to determine its LABEL queries. For concreteness, we use (a variant of) the algorithm of Cohn et al. [1994], denoted by CAL, as a subroutine in LARCH. The inputs to CAL are: a version space V, the LABEL oracle, a target error rate, and a confidence parameter; and its output is a set of labeled examples (implicitly defining a new version space). CAL is described in Appendix B; its essential properties are specified in Lemma 1.
LARCH differs from LABEL-only active learners (like CAL) by first calling SEARCH in Step 3. If SEARCH returns ⊥, LARCH checks to see if the last call to CAL resulted in a small-enough error, halting if so in Step 6, and decreasing the allowed error rate if not in Step 8. If SEARCH instead returns a counterexample, the hypothesis class Hk must be impoverished, so in Step 12, LARCH increases the complexity of the hypothesis class to the minimum complexity sufficient to correctly classify all known labeled examples in S. After the SEARCH, CAL is called in Step 14 to discover a sufficiently low-error (or at least low-disagreement) version space with high probability.
When LARCH advances to index k (for any k ≤ k*), its set of labeled examples S may imply a version space Hk(S) ⊆ Hk that can be actively-learned more efficiently than the whole of Hk. In our analysis, we quantify this through the disagreement coefficient of Hk(S), which may be markedly smaller than that of the full Hk.
The following theorem bounds the oracle query complexity of Algorithm 1 for learning with both SEARCH and LABEL in the realizable setting. The proof is in Section 4.2.
Theorem 1. Assume that err(h*) = 0. For each k′ ≥ 0, let θ_{k′}(·) be the disagreement coefficient of H_{k′}(S_{[k′]}), where S_{[k′]} is the set of labeled examples S in LARCH at the first time that k ≥ k′. Fix any ε, δ ∈ (0, 1). If LARCH is run with inputs hypothesis classes {Hk}_{k=0}^∞, oracles LABEL and SEARCH, and learning parameters ε, δ, then with probability at least 1 − δ: LARCH halts after at most k* + log₂(1/ε) for-loop iterations and returns a classifier with error rate at most ε; furthermore,
Algorithm 1 LARCH
input: Nested hypothesis classes H0 ⊆ H1 ⊆ · · · ; oracles LABEL and SEARCH; learning parameters ε, δ ∈ (0, 1)
1: initialize S ← ∅, (index) k ← 0, ℓ ← 0
2: for i = 1, 2, . . . do
3:   e ← SEARCH_{Hk}(Hk(S))
4:   if e = ⊥ then   # no counterexample found
5:     if 2^{−ℓ} ≤ ε then
6:       return any h ∈ Hk(S)
7:     else
8:       ℓ ← ℓ + 1
9:     end if
10:  else   # counterexample found
11:    S ← S ∪ {e}
12:    k ← min{k′ : Hk′(S) ≠ ∅}
13:  end if
14:  S ← S ∪ CAL(Hk(S), LABEL, 2^{−ℓ}, δ/(i² + i))
15: end for
it draws at most Õ(k* · d_{k*}/ε) unlabeled examples from DX, makes at most k* + log₂(1/ε) queries to SEARCH, and at most Õ((k* + log(1/ε)) · (max_{k′≤k*} θ_{k′}(ε)) · d_{k*} · log²(1/ε)) queries to LABEL.
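The control flow of LARCH is straightforward to express; here is a schematic rendering (ours), with the oracles and the CAL subroutine left as application-supplied stubs:

```python
# Schematic of LARCH's control flow. search, cal, and consistent are stubs:
# search(k, S) returns a counterexample or None; cal(k, S, err, conf) returns
# newly labeled examples as in Lemma 1; consistent(k, S) tests H_k(S) != {}.
def larch(search, cal, consistent, eps, delta):
    S, k, ell, i = set(), 0, 0, 1
    while True:
        e = search(k, S)                     # SEARCH on the version space H_k(S)
        if e is None:
            if 2.0 ** (-ell) <= eps:
                return k, S                  # any h in H_k(S) has error <= eps
            ell += 1                         # tighten the target error
        else:
            S.add(e)
            while not consistent(k, S):      # k <- min{k' : H_k'(S) nonempty}
                k += 1
        S |= cal(k, S, 2.0 ** (-ell), delta / (i * i + i))
        i += 1
```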
Union-of-intervals example. We now show an implication of Theorem 1 in the case where the target hypothesis h* is the union of non-trivial intervals in X := [0, 1], assuming that DX is uniform. For k ≥ 0, let Hk be the hypothesis class of the union of up to k intervals in [0, 1], with H0 containing only the always-negative hypothesis. (Thus, h* is the union of k* non-empty intervals.) The disagreement coefficient of H1 is Ω(1/ε), and hence LABEL-only active learners like CAL are not very effective at learning with such classes. However, the first SEARCH query by LARCH provides a counterexample to H0, which must be a positive example (x1, +1). Hence, H1(S[1]) (where S[1] is defined in Theorem 1) is the class of intervals that contain x1, with disagreement coefficient θ1 ≤ 4.
Now consider the inductive case. Just before LARCH advances its index to a value k (for any k ≤ k*), SEARCH returns a counterexample (x, h*(x)) to the version space; every hypothesis in this version space (which could be empty) is a union of fewer than k intervals. If the version space is empty, then S must already contain positive examples from at least k different intervals in h* and at least k − 1 negative examples separating them. If the version space is not empty, then the point x is either a positive example belonging to a previously uncovered interval in h* or a negative example splitting an existing interval. In either case, S[k] contains positive examples from at least k distinct intervals separated by at least k − 1 negative examples. The disagreement coefficient of the set of unions of k intervals consistent with S[k] is at most 4k, independent of ε.
The VC dimension of Hk is O(k), so Theorem 1 implies that with high probability, LARCH makes at most k* + log(1/ε) queries to SEARCH and Õ((k*)³ log(1/ε) + (k*)² log³(1/ε)) queries to LABEL.
4.2
Proof of Theorem 1
The proof of Theorem 1 uses the following lemma regarding the CAL subroutine, proved in Appendix B. It is similar to a result of Hanneke [2011], but an important difference here is that the input version space V is not assumed to contain h*.
Lemma 1. Assume LABEL(x) = h*(x) for every x in the support of DX. For any hypothesis set V ⊆ Y^X with VC dimension d < ∞, and any ε, δ ∈ (0, 1), the following holds with probability at least 1 − δ. CAL(V, LABEL, ε, δ) returns labeled examples T ⊆ {(x, h*(x)) : x ∈ X} such that for any h in V(T), Pr_{(x,y)∼D}[h(x) ≠ y ∧ x ∈ Dis(V(T))] ≤ ε; furthermore, it draws at most Õ(d/ε) unlabeled examples from DX, and makes at most Õ(θ_V(ε) · d · log²(1/ε)) queries to LABEL.
We now prove Theorem 1. By Lemma 1 and a union bound, there is an event with probability at least 1 − Σ_{i≥1} δ/(i² + i) ≥ 1 − δ such that each call to CAL made by LARCH satisfies the high-probability guarantee from Lemma 1. We henceforth condition on this event.
We first establish the guarantee on the error rate of a hypothesis returned by LARCH. By the assumed properties of LABEL and SEARCH, and the properties of CAL from Lemma 1, the labeled examples S in LARCH are always consistent with h*. Moreover, the return property of CAL implies that at the end of any loop iteration, with the present values of S, k, and ℓ, we have Pr_{(x,y)∼D}[h(x) ≠ y ∧ x ∈ Dis(Hk(S))] ≤ 2^{−ℓ} for all h ∈ Hk(S). (The same holds trivially before the first loop iteration.) Therefore, if LARCH halts and returns a hypothesis h ∈ Hk(S), then there is no counterexample to Hk(S), and Pr_{(x,y)∼D}[h(x) ≠ y ∧ x ∈ Dis(Hk(S))] ≤ ε. These consequences and the law of total probability imply err(h) = Pr_{(x,y)∼D}[h(x) ≠ y ∧ x ∈ Dis(Hk(S))] ≤ ε.
We next consider the number of for-loop iterations executed by LARCH. Let S_i, k_i, and ℓ_i be, respectively, the values of S, k, and ℓ at the start of the i-th for-loop iteration in LARCH. We claim that if LARCH does not halt in the i-th iteration, then one of k and ℓ is incremented by at least one. Clearly, if there is no counterexample to H_{k_i}(S_i) and 2^{−ℓ_i} > ε, then ℓ is incremented by one (Step 8). If, instead, there is a counterexample (x, y), then H_{k_i}(S_i ∪ {(x, y)}) = ∅, and hence k is incremented to some index larger than k_i (Step 12). This proves that k_{i+1} + ℓ_{i+1} ≥ k_i + ℓ_i + 1. We also have k_i ≤ k*, since h* ∈ H_{k*} is consistent with S, and ℓ_i ≤ log₂(1/ε), as long as LARCH does not halt in for-loop iteration i. So the total number of for-loop iterations is at most k* + log₂(1/ε). Together with Lemma 1, this bounds the number of unlabeled examples drawn from DX.
Finally, we bound the number of queries to SEARCH and LABEL. The number of queries to SEARCH is the same as the number of for-loop iterations, which is at most k* + log₂(1/ε). By Lemma 1 and the fact that V(S′ ∪ S″) ⊆ V(S′) for any hypothesis space V and sets of labeled examples S′, S″, the number of LABEL queries made by CAL in the i-th for-loop iteration is at most Õ(θ_{k_i}(ε) · d_{k_i} · ℓ_i² · polylog(i)). The claimed bound on the number of LABEL queries made by LARCH now readily follows by taking a max over i, and using the facts that k_i ≤ k* and d_{k′} ≤ d_{k*} for all k′ ≤ k*.
4.3
An Improved Algorithm
LARCH is somewhat conservative in its use of SEARCH, interleaving just one SEARCH query between sequences of LABEL queries (from CAL). Often, it is advantageous to advance to higher complexity hypothesis classes quickly, as long as there is justification to do so. Counterexamples from SEARCH provide such justification, and a ⊥ result from SEARCH also provides useful feedback about the current version space: outside of its disagreement region, the version space is in complete agreement with h* (even if the version space does not contain h*). Based on these observations, we propose an improved algorithm for the realizable setting, which we call SEABEL. Due to space limitations, we present it in Appendix C. We prove the following performance guarantee for SEABEL.
Theorem 2. Assume that err(h*) = 0. Let θk(·) denote the disagreement coefficient of V_i^{k_i} at the first iteration i in SEABEL where k_i ≥ k. Fix any ε, δ ∈ (0, 1). If SEABEL is run with inputs hypothesis classes {Hk}_{k=0}^∞, oracles SEARCH and LABEL, and learning parameters ε, δ ∈ (0, 1), then with probability 1 − δ: SEABEL halts and returns a classifier with error rate at most ε; furthermore, it draws at most Õ((d_{k*} + log k*)/ε) unlabeled examples from DX, makes at most k* + O(log(d_{k*}/ε) + log log k*) queries to SEARCH, and at most Õ(max_{k≤k*} θk(2ε) · (d_{k*} log²(1/ε) + log k*)) queries to LABEL.
It is not generally possible to directly compare Theorems 1 and 2 on account of the algorithm-dependent disagreement coefficient bounds. However, in cases where these disagreement coefficients are comparable (as in the union-of-intervals example), the SEARCH complexity in Theorem 2 is slightly higher (by additive log terms), but the LABEL complexity is smaller than that from Theorem 1 by roughly a factor of k*. For the union-of-intervals example, SEABEL would learn a target union of k* intervals with k* + O(log(k*/ε)) queries to SEARCH and Õ((k*)² log²(1/ε)) queries to LABEL.
5 Non-Realizable Case
In this section, we consider the case where the optimal hypothesis h* may have non-zero error rate,
i.e., the non-realizable (or agnostic) setting. In this case, the algorithm LARCH, which was designed
for the realizable setting, is no longer applicable. First, examples obtained by LABEL and SEARCH
are of different quality: those returned by SEARCH always agree with h*, whereas the labels given
by LABEL need not agree with h*. Moreover, the version spaces (even when k = k*) as defined by
LARCH may always be empty due to the noisy labels.
Another complication arises in our SRM setting that differentiates it from the usual agnostic active
learning setting. When working with a specific hypothesis class H_k in the nested sequence, we
may observe high error rates because (i) the finite sample error is too high (but additional labeled
examples could reduce it), or (ii) the current hypothesis class H_k is impoverished. In case (ii), the best
hypothesis in H_k may have a much larger error rate than h*, and hence lower bounds [Kääriäinen,
2006] imply that active learning on H_k instead of H_{k*} may be substantially more difficult.
These difficulties in the SRM setting are circumvented by an algorithm that adaptively estimates the
error of h*. The algorithm, A-LARCH (Algorithm 5), is presented in Appendix D.
Theorem 3. Assume err(h*) = ν. Let θ_k(·) denote the disagreement coefficient of V_i at the first
iteration i in A-LARCH where k_i ≥ k. Fix any ε, δ ∈ (0, 1). If A-LARCH is run with inputs hypothesis
classes {H_k}_{k=0}^∞, oracles SEARCH and LABEL, learning parameter δ, and unlabeled example
budget Õ((d_{k*} + log k*)(ν + ε)/ε²), then with probability 1 − δ: A-LARCH returns a classifier
with error rate ≤ ν + ε; it makes at most k* + O(log(d_{k*}/ε) + log log k*) queries to SEARCH, and
Õ(max_{k≤k*} θ_k(2ν + 2ε) · (d_{k*} log²(1/ε) + log k*) · (1 + ν²/ε²)) queries to LABEL.
The proof is in Appendix D. The LABEL query complexity is at least a factor of k* better than
that in Hanneke [2011], and sometimes exponentially better thanks to the reduced disagreement
coefficient of the version space when consistency constraints are incorporated.
5.1 AA-LARCH: an Opportunistic Anytime Algorithm
In many practical scenarios, termination conditions based on quantities like a target excess error rate
ε are undesirable. The target ε is unknown, and we instead prefer an algorithm that performs as well
as possible until a cost budget is exhausted. Fortunately, when the primary cost being considered is
LABEL queries, there are many LABEL-only active learning algorithms that readily work in such an
"anytime" setting [see, e.g., Dasgupta et al., 2007, Hanneke, 2014].
The situation is more complicated when we consider both SEARCH and LABEL: we can often make
substantially more progress with SEARCH queries than with LABEL queries (as the error rate of the
best hypothesis in H_{k′} for k′ > k can be far lower than in H_k). AA-LARCH (Algorithm 2) shows
that although these queries come at a higher cost, the cost can be amortized.
AA-LARCH relies on several subroutines: SAMPLE-AND-LABEL, ERROR-CHECK,
PRUNE-VERSION-SPACE, and UPGRADE-VERSION-SPACE (Algorithms 6, 7, 8, and 9).
The detailed descriptions are deferred to Appendix E. SAMPLE-AND-LABEL performs standard
disagreement-based selective sampling using the oracle LABEL; labels of examples in the disagreement
region are queried, and otherwise inferred. PRUNE-VERSION-SPACE prunes the version space given the
labeled examples collected, based on standard generalization error bounds. ERROR-CHECK checks if
the best hypothesis in the version space has large error; SEARCH is used to find a systematic mistake
for the version space; if either event happens, AA-LARCH calls UPGRADE-VERSION-SPACE to
increase k, the level of our working hypothesis class.
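As an illustration, the following Python fragment sketches the disagreement-based selective sampling that SAMPLE-AND-LABEL performs. Representing the version space as an explicit list of hypotheses is our simplification and is only feasible for toy problems; the function names are ours.

    def sample_and_label(version_space, label_oracle, x, labeled, c):
        # Query LABEL only when the version space disagrees on x; otherwise
        # the label can be inferred, at no cost, from the unanimous prediction.
        preds = {h(x) for h in version_space}
        if len(preds) > 1:                 # x lies in the disagreement region
            y = label_oracle(x)
            c += 1                         # one unit of LABEL cost
        else:
            (y,) = preds
        labeled.append((x, y))
        return labeled, c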
Theorem 4. Assume err(h*) = ν. Let θ_k(·) denote the disagreement coefficient of V_i at the first
iteration i after which k ≥ k*. Fix any ε ∈ (0, 1). Let n_ε = Õ(max_{k≤k*} θ_k(2ν + 2ε) d_{k*} (1 + ν²/ε²))
and define C_ε = 2(n_ε + k*τ). Run Algorithm 2 with a nested sequence of hypotheses {H_k}_{k=0}^∞,
oracles LABEL and SEARCH, confidence parameter δ, cost ratio τ ≥ 1, and upper bound N =
Õ(d_{k*}/ε²). If the cost spent is at least C_ε, then with probability 1 − δ, the current hypothesis ĥ has
error at most ν + ε.
The proof is in Appendix E. A comparison to Theorem 3 shows that AA-LARCH is adaptive: for any
cost complexity C, the excess error rate ε is roughly at most twice that achieved by A-LARCH.
6 Discussion
The SEARCH oracle captures a powerful form of interaction that is useful for machine learning. Our
theoretical analyses of LARCH and variants demonstrate that SEARCH can substantially improve
LABEL-based active learners, while being plausibly cheaper to implement than oracles like CCQ.
Algorithm 2 AA-LARCH
input: Nested hypothesis set H_0 ⊆ H_1 ⊆ ···; oracles LABEL and SEARCH; learning parameter
δ ∈ (0, 1); SEARCH-to-LABEL cost ratio τ; dataset size upper bound N.
output: hypothesis ĥ.
1: Initialize: consistency constraints S ← ∅, counter c ← 0, k ← 0, verified labeled dataset L̃ ← ∅,
   working labeled dataset L_0 ← ∅, unlabeled examples processed i ← 0, V_i ← H_k(S).
2: loop
3:   Reset counter c ← 0.
4:   repeat
5:     if ERROR-CHECK(V_i, L_i, δ_i) then
6:       (k, S, V_i) ← UPGRADE-VERSION-SPACE(k, S, ∅)
7:       V_i ← PRUNE-VERSION-SPACE(V_i, L̃, δ_i)
8:       L_i ← L̃
9:       continue loop
10:    end if
11:    i ← i + 1
12:    (L_i, c) ← SAMPLE-AND-LABEL(V_{i−1}, LABEL, L_{i−1}, c)
13:    V_i ← PRUNE-VERSION-SPACE(V_{i−1}, L_i, δ_i)
14:  until c = τ or l_i = N
15:  e ← SEARCH_{H_k}(V_i)
16:  if e ≠ ⊥ then
17:    (k, S, V_i) ← UPGRADE-VERSION-SPACE(k, S, {e})
18:    V_i ← PRUNE-VERSION-SPACE(V_i, L̃, δ_i)
19:    L_i ← L̃
20:  else
21:    Update verified dataset L̃ ← L_i.
22:    Store temporary solution ĥ ← argmin_{h′ ∈ V_i} err(h′, L̃).
23:  end if
24: end loop
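The control flow of Algorithm 2 can also be rendered compactly in Python. This is a sketch under the assumption that the subroutines of Algorithms 6-9 (and the sample_and_label helper above) are supplied by the caller; none of the names below come from a released implementation.

    def aa_larch(H, unlabeled, label, search, subs, delta, tau, N, budget):
        error_check, prune, upgrade, sample_and_label, emp_err = subs
        S, k, i, cost = set(), 0, 0, 0
        L_ver, L = [], []                      # verified / working labeled datasets
        V, h_best = H(k, S), None
        while cost < budget:                   # anytime: run until the budget is spent
            c = 0
            while c < tau and len(L) < N:      # amortize one SEARCH query of cost tau
                if error_check(V, L, delta, i):
                    k, S, V = upgrade(H, k, S, None)
                    V, L = prune(V, L_ver, delta, i), list(L_ver)
                    break                      # "continue loop" in the listing
                i += 1
                L, c = sample_and_label(V, label, next(unlabeled), L, c)
                V = prune(V, L, delta, i)
            else:                              # the repeat-until finished without upgrade
                e = search(H, k, V)            # systematic mistake, or None for bottom
                cost += tau
                if e is not None:
                    k, S, V = upgrade(H, k, S, {e})
                    V, L = prune(V, L_ver, delta, i), list(L_ver)
                else:
                    L_ver = list(L)            # lines 21-22: verify and store
                    h_best = min(V, key=lambda h: emp_err(h, L_ver))
            cost += c
        return h_best                          # anytime output: best verified hypothesis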
Are there examples where CCQ is substantially more powerful than SEARCH? This is a key question,
because a good active learning system should use minimally powerful oracles. Another key question
is: can the benefits of SEARCH be provided in a computationally efficient, general-purpose manner?
References
Josh Attenberg and Foster J. Provost. Why label when you can search? Alternatives to active learning
for applying human resources to build classification models under extreme class imbalance. In
Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and
Data Mining, Washington, DC, USA, July 25-28, 2010, pages 423–432, 2010.
Maria-Florina Balcan and Steve Hanneke. Robust interactive learning. In COLT, 2012.
Maria-Florina Balcan, Alina Beygelzimer, and John Langford. Agnostic active learning. In ICML, 2006.
Maria-Florina Balcan, Steve Hanneke, and Jennifer Wortman Vaughan. The true sample complexity
of active learning. Machine Learning, 80(2-3):111–139, 2010.
Alina Beygelzimer, Daniel Hsu, John Langford, and Tong Zhang. Agnostic active learning without
constraints. In Advances in Neural Information Processing Systems 23, 2010.
Alina Beygelzimer, Daniel Hsu, Nikos Karampatziakis, John Langford, and Tong Zhang. Efficient
active learning. In ICML Workshop on Online Trading of Exploration and Exploitation, 2011.
David A. Cohn, Les E. Atlas, and Richard E. Ladner. Improving generalization with active learning.
Machine Learning, 15(2):201–221, 1994.
Sanjoy Dasgupta. Coarse sample complexity bounds for active learning. In Advances in Neural
Information Processing Systems 18, 2005.
Sanjoy Dasgupta, Daniel Hsu, and Claire Monteleoni. A general agnostic active learning algorithm.
In Advances in Neural Information Processing Systems 20, 2007.
Luc Devroye, László Györfi, and Gábor Lugosi. A Probabilistic Theory of Pattern Recognition.
Springer Verlag, 1996.
Steve Hanneke. A bound on the label complexity of agnostic active learning. In ICML, pages
249–278, 2007.
Steve Hanneke. Rates of convergence in active learning. The Annals of Statistics, 39(1):333–361, 2011.
Steve Hanneke. Theory of disagreement-based active learning. Foundations and Trends in Machine
Learning, 7(2-3):131–309, 2014. ISSN 1935-8237. doi: 10.1561/2200000037.
Tzu-Kuo Huang, Alekh Agarwal, Daniel Hsu, John Langford, and Robert E. Schapire. Efficient and
parsimonious agnostic active learning. In Advances in Neural Information Processing Systems 28, 2015.
Matti Kääriäinen. Active learning in the non-realizable case. In Algorithmic Learning Theory, 17th
International Conference, ALT 2006, Barcelona, Spain, October 7-10, 2006, Proceedings, pages
63–77, 2006.
Patrice Y. Simard, David Maxwell Chickering, Aparna Lakshmiratan, Denis Xavier Charles, Léon
Bottou, Carlos Garcia Jurado Suarez, David Grangier, Saleema Amershi, Johan Verwey, and Jina
Suh. ICE: enabling non-experts to build models interactively for large-scale lopsided problems.
CoRR, abs/1409.4814, 2014. URL http://arxiv.org/abs/1409.4814.
Vladimir N. Vapnik. Estimation of Dependences Based on Empirical Data. Springer-Verlag, 1982.
Vladimir N. Vapnik and Alexey Ya. Chervonenkis. On the uniform convergence of relative frequencies
of events to their probabilities. Theory of Probability and Its Applications, 16(2):264–280, 1971.
End-to-End Kernel Learning with
Supervised Convolutional Kernel Networks
Julien Mairal
Inria*
[email protected]
Abstract
In this paper, we introduce a new image representation based on a multilayer kernel
machine. Unlike traditional kernel methods where data representation is decoupled
from the prediction task, we learn how to shape the kernel with supervision. We
proceed by first proposing improvements of the recently-introduced convolutional
kernel networks (CKNs) in the context of unsupervised learning; then, we derive
backpropagation rules to take advantage of labeled training data. The resulting
model is a new type of convolutional neural network, where optimizing the filters
at each layer is equivalent to learning a linear subspace in a reproducing kernel
Hilbert space (RKHS). We show that our method achieves reasonably competitive
performance for image classification on some standard "deep learning" datasets
such as CIFAR-10 and SVHN, and also for image super-resolution, demonstrating
the applicability of our approach to a large variety of image-related tasks.
1 Introduction
In the past years, deep neural networks such as convolutional or recurrent ones have become highly
popular for solving various prediction problems, notably in computer vision and natural language
processing. Conceptually close to approaches that were developed several decades ago (see [13]),
they greatly benefit from the large amounts of labeled data that have been made available recently,
making it possible to learn huge numbers of model parameters without worrying too much about overfitting.
Among other reasons explaining their success, the engineering effort of the deep learning community
and various methodological improvements have made it possible to learn in a day on a GPU complex
models that would have required weeks of computations on a traditional CPU (see, e.g., [10, 12, 23]).
Before the resurgence of neural networks, non-parametric models based on positive definite kernels
were one of the most dominant topics in machine learning [22]. These approaches are still widely
used today because of several attractive features. Kernel methods are indeed versatile; as long as a
positive definite kernel is specified for the type of data considered (e.g., vectors, sequences, graphs,
or sets), a large class of machine learning algorithms originally defined for linear models may be
used. This family includes supervised formulations such as support vector machines and unsupervised
ones such as principal or canonical component analysis, or K-means and spectral clustering. The
problem of data representation is thus decoupled from that of learning theory and algorithms. Kernel
methods also admit natural mechanisms to control the learning capacity and reduce overfitting [22].
On the other hand, traditional kernel methods suffer from several drawbacks. The first one is their
computational complexity, which grows quadratically with the sample size due to the computation of
the Gram matrix. Fortunately, significant progress has been achieved to solve the scalability issue,
either by exploiting low-rank approximations of the kernel matrix [28, 31], or with random sampling
techniques for shift-invariant kernels [21]. The second disadvantage is more critical; by decoupling
* Thoth team, Inria Grenoble, Laboratoire Jean Kuntzmann, CNRS, Univ. Grenoble Alpes, France.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
learning and data representation, kernel methods seem by nature incompatible with end-to-end
learning, that is, with the representation of data adapted to the task at hand, which is the cornerstone of
deep neural networks and one of the main reasons for their success. The main objective of this paper is
precisely to tackle this issue in the context of image modeling.
Specifically, our approach is based on convolutional kernel networks, which have been recently
introduced in [18]. Similar to hierarchical kernel descriptors [3], local image neighborhoods are
mapped to points in a reproducing kernel Hilbert space via the kernel trick. Then, hierarchical
representations are built via kernel compositions, producing a sequence of "feature maps" akin to
convolutional neural networks, but of infinite dimension. To make the image model computationally
tractable, convolutional kernel networks provide an approximation scheme that can be interpreted as
a particular type of convolutional neural network learned without supervision.
To perform end-to-end learning given labeled data, we use a simple but effective principle consisting
of learning discriminative subspaces in RKHSs, where we project data. We implement this idea
in the context of convolutional kernel networks, where linear subspaces, one per layer, are jointly
optimized by minimizing a supervised loss function. The formulation turns out to be a new type of
convolutional neural network with a non-standard parametrization. The network also admits simple
principles to learn without supervision: learning the subspaces may be indeed achieved efficiently
with classical kernel approximation techniques [28, 31].
To demonstrate the effectiveness of our approach in various contexts, we consider image classification
benchmarks such as CIFAR-10 [12] and SVHN [19], which are often used to evaluate deep neural
networks; then, we adapt our model to perform image super-resolution, which is a challenging inverse
problem. On the SVHN and CIFAR-10 datasets, we obtain a competitive accuracy, with about 2% and
10% error rates, respectively, without model averaging or data augmentation. For image up-scaling,
we outperform recent approaches based on classical convolutional neural networks [7, 8].
We believe that these results are highly promising. Our image model achieves competitive performance in two different contexts, paving the way to many other applications. Moreover, our results are
also subject to improvements. In particular, we did not use GPUs yet, which has limited our ability
to exhaustively explore model hyper-parameters and evaluate the accuracy of large networks. We
also did not investigate classical regularization/optimization techniques such as Dropout [12], batch
normalization [11], or recent advances allowing to train very deep networks [10, 23]. To gain more
scalability and start exploring these directions, we are currently working on a GPU implementation,
which we plan to publicly release along with our current CPU implementation.
Related Deep and Shallow Kernel Machines. One of our goals is to make a bridge between kernel
methods and deep networks, and ideally reach the best of both worlds. Given the potentially attractive
features of such a combination, several attempts have been made in the past to unify these two schools
of thought. A first proof of concept was introduced in [5] with the arc-cosine kernel, which admits an
integral representation that can be interpreted as a one-layer neural network with random weights
and infinite number of rectified linear units. Besides, a multilayer kernel may be obtained by kernel
compositions [5]. Then, hierarchical kernel descriptors [3] and convolutional kernel networks [18]
extend a similar idea in the context of images leading to unsupervised representations [18].
Multiple kernel learning [24] is also related to our work, since it is a notable attempt to introduce
supervision in the kernel design. It provides techniques to select a combination of kernels from a predefined collection, and typically requires already having "good" kernels in the collection to perform
well. More related to our work, the backpropagation algorithm for the Fisher kernel introduced in [25]
learns the parameters of a Gaussian mixture model with supervision. In comparison, our approach
does not require a probabilistic model and learns parameters at several layers. Finally, we note that a
concurrent effort to ours is conducted in the Bayesian community with deep Gaussian processes [6],
complementing the Frequentist approach that we follow in our paper.
2 Learning Hierarchies of Subspaces with Convolutional Kernel Networks
In this section, we present the principles of convolutional kernel networks and a few generalizations
and improvements of the original approach of [18]. Essentially, the model builds upon four ideas that
are detailed below and that are illustrated in Figure 1 for a model with a single layer.
Idea 1: use the kernel trick to represent local image neighborhoods in a RKHS.
Given a set X, a positive definite kernel K : X × X → R implicitly defines a Hilbert space H, called
the reproducing kernel Hilbert space (RKHS), along with a mapping φ : X → H. This embedding is
such that the kernel value K(x, x′) corresponds to the inner product ⟨φ(x), φ(x′)⟩_H. Called the "kernel
trick", this approach can be used to obtain nonlinear representations of local image patches [3, 18].
More precisely, consider an image I_0 : Ω_0 → R^{p_0}, where p_0 is the number of channels, e.g., p_0 = 3
for RGB, and Ω_0 ⊂ [0, 1]² is a set of pixel coordinates, typically a two-dimensional grid. Given two
image patches x, x′ of size e_0 × e_0, represented as vectors in R^{p_0 e_0²}, we define a kernel K_1 as

    K_1(x, x′) = ‖x‖ ‖x′‖ κ_1( ⟨ x/‖x‖ , x′/‖x′‖ ⟩ )   if x, x′ ≠ 0,  and 0 otherwise,    (1)
where ‖·‖ and ⟨·, ·⟩ denote the usual Euclidean norm and inner product, respectively, and κ_1(⟨·, ·⟩) is
a dot-product kernel on the sphere. Specifically, κ_1 should be smooth and its Taylor expansion have
non-negative coefficients to ensure positive definiteness [22]. For example, the arc-cosine [5] or the
Gaussian (RBF) kernels may be used: given two vectors y, y′ with unit ℓ_2-norm, choose for instance

    κ_1(⟨y, y′⟩) = e^{α_1(⟨y, y′⟩ − 1)} = e^{−(α_1/2) ‖y − y′‖_2²}.    (2)

Then, we have implicitly defined the RKHS H_1 associated to K_1 and a mapping φ_1 : R^{p_0 e_0²} → H_1.
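For concreteness, the kernel of Eqs. (1)-(2) can be evaluated on two vectorized patches with a few lines of NumPy. This is a direct transcription of the formulas, not code from the authors, and the parameter name alpha is ours.

    import numpy as np

    def k1(x, xp, alpha=1.0):
        # K_1 of Eq. (1), with the RBF dot-product kernel of Eq. (2).
        nx, nxp = np.linalg.norm(x), np.linalg.norm(xp)
        if nx == 0.0 or nxp == 0.0:
            return 0.0
        cos = float(np.dot(x / nx, xp / nxp))   # inner product on the sphere
        return nx * nxp * np.exp(alpha * (cos - 1.0))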
Idea 2: project onto a finite-dimensional subspace of the RKHS with convolution layers.
The representation of patches in a RKHS requires finite-dimensional approximations to be computationally
manageable. The original model of [18] does that by exploiting an integral form of the RBF
kernel. Specifically, given two patches x and x′, convolutional kernel networks provide two vectors
ψ_1(x), ψ_1(x′) in R^{p_1} such that the kernel value ⟨φ_1(x), φ_1(x′)⟩_{H_1} is close to the Euclidean inner
product ⟨ψ_1(x), ψ_1(x′)⟩. After applying this transformation to all overlapping patches of the input
image I_0, a spatial map M_1 : Ω_0 → R^{p_1} may be obtained such that for all z in Ω_0, M_1(z) = ψ_1(x_z),
where x_z is the e_0 × e_0 patch from I_0 centered at pixel location z.² With the approximation scheme
of [18], M_1 can be interpreted as the output feature map of a one-layer convolutional neural network.
A conceptual drawback of [18] is that data points φ_1(x_1), φ_1(x_2), . . . are approximated by vectors
that do not live in the RKHS H_1. This issue can be solved by using variants of the Nyström
method [28], which consists of projecting data onto a subspace of H_1 with finite dimension p_1.
For this task, we have adapted the approach of [31]: we build a database of n patches x_1, . . . , x_n
randomly extracted from various images and normalized to have unit ℓ_2-norm, and perform a spherical
K-means algorithm to obtain p_1 centroids z_1, . . . , z_{p_1} with unit ℓ_2-norm. Then, a new patch x is
approximated by its projection onto the p_1-dimensional subspace F_1 = Span(φ_1(z_1), . . . , φ_1(z_{p_1})).
is classical (see [28, 31] and Appendix A), leading to
>
?1/2
> x
if x 6= 0 and 0 otherwise,
(3)
?1 (x) := kxk?1 (Z Z)
?1 Z
kxk
where we have introduced the matrix Z = [z_1, . . . , z_{p_1}], and, by an abuse of notation, the function κ_1
is applied pointwise to its arguments. Then, the spatial map M_1 : Ω_0 → R^{p_1} introduced above can
be obtained by (i) computing the quantities Z^⊤x for all patches x of the image I (spatial convolution
after mirroring the filters z_j); (ii) contrast-normalization involving the norm ‖x‖; (iii) applying the
pointwise non-linear function κ_1; (iv) applying the linear transform κ_1(Z^⊤Z)^{−1/2} at every pixel
location (which may be seen as a 1×1 spatial convolution); (v) multiplying by the norm ‖x‖, making ψ_1
homogeneous. In other words, we obtain a particular convolutional neural network, with a non-standard
parametrization. Note that learning requires only performing a K-means algorithm and computing
the inverse square-root matrix κ_1(Z^⊤Z)^{−1/2}; therefore, the training procedure is very fast.
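The two training-free ingredients of this scheme, spherical K-means and the projection formula (3), fit in a short NumPy sketch. The function names, iteration counts, and the small ridge added for numerical stability are our choices; patches are stored one per column.

    import numpy as np

    def spherical_kmeans(X, p1, iters=10, seed=0):
        # X: (d, n) matrix of l2-normalized patches; returns filters Z of shape (d, p1).
        rng = np.random.default_rng(seed)
        Z = X[:, rng.choice(X.shape[1], size=p1, replace=False)].copy()
        for _ in range(iters):
            assign = np.argmax(Z.T @ X, axis=0)          # cosine-similarity assignment
            for j in range(p1):
                m = X[:, assign == j].sum(axis=1)
                if np.linalg.norm(m) > 0:
                    Z[:, j] = m / np.linalg.norm(m)      # re-project onto the sphere
        return Z

    def encode(X, Z, alpha=1.0):
        # psi_1 of Eq. (3), applied to every column of X at once.
        kappa = lambda u: np.exp(alpha * (u - 1.0))      # kernel of Eq. (2)
        norms = np.maximum(np.linalg.norm(X, axis=0), 1e-8)
        K = kappa(Z.T @ Z) + 1e-6 * np.eye(Z.shape[1])   # small ridge for stability
        w, V = np.linalg.eigh(K)
        A = (V * (1.0 / np.sqrt(w))) @ V.T               # kappa(Z'Z)^{-1/2}
        return norms * (A @ kappa(Z.T @ (X / norms)))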
It is worth noting that the encoding function ψ_1 with kernel (2) is reminiscent of radial basis
function networks (RBFNs) [4], whose hidden layer resembles (3) without the matrix κ_1(Z^⊤Z)^{−1/2}
and with no normalization. The difference between RBFNs and our model is nevertheless significant.
The RKHS mapping, which is absent from RBFNs, is indeed a key to the multilayer construction
that will be presented shortly: a network layer takes points from the RKHS of the previous layer as input
and uses the corresponding RKHS inner product. To the best of our knowledge, there is no similar
multilayer and/or convolutional construction in the radial basis function network literature.
² To simplify, we use zero-padding when patches are close to the image boundaries, but this is optional.
Figure 1: Our variant of convolutional kernel networks, illustrated between layers 0 and 1. Local
patches (receptive fields) are mapped to the RKHS H_1 via the kernel trick and then projected to
the finite-dimensional subspace F_1 = Span(φ(z_1), . . . , φ(z_{p_1})). The small blue crosses on the right
represent the points φ(z_1), . . . , φ(z_{p_1}). With no supervision, optimizing F_1 consists of minimizing
projection residuals. With supervision, the subspace is optimized via back-propagation. Going from
layer k to layer k + 1 is achieved by stacking the model described here and shifting indices.
Idea 3: linear pooling in F_1 is equivalent to linear pooling on the finite-dimensional map M_1.
The previous steps transform an image I_0 : Ω_0 → R^{p_0} into a map M_1 : Ω_0 → R^{p_1}, where each
vector M_1(z) in R^{p_1} encodes a point in F_1 representing information of a local image neighborhood
centered at location z. Then, convolutional kernel networks involve a pooling step to gain invariance
to small shifts, leading to another finite-dimensional map I_1 : Ω_1 → R^{p_1} with smaller resolution:

    I_1(z) = Σ_{z′ ∈ Ω_0} M_1(z′) e^{−β_1 ‖z′ − z‖_2²}.    (4)

The Gaussian weights act as an anti-aliasing filter for downsampling the map M_1, and β_1 is set
according to the desired subsampling factor (see [18]), which does not need to be an integer. Then, every
point I_1(z) in R^{p_1} may be interpreted as a linear combination of points in F_1, which is itself in F_1
since F_1 is a linear subspace. Note that the linear pooling step was originally motivated in [18] as an
approximation scheme for a match kernel, but this point of view is not critically important here.
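Because the Gaussian weight in (4) factorizes over the two coordinates, the pooling can be implemented with two small matrices. The sketch below is ours, with M stored as a (channels, height, width) array and a possibly non-integer stride.

    import numpy as np

    def gaussian_pool(M, beta, stride):
        # Linear pooling of Eq. (4); the 2-D Gaussian separates into y and x factors.
        p, H, W = M.shape
        ys, xs = np.arange(H), np.arange(W)
        oy, ox = np.arange(0.0, H, stride), np.arange(0.0, W, stride)
        wy = np.exp(-beta * (oy[:, None] - ys[None, :]) ** 2)    # (H', H)
        wx = np.exp(-beta * (ox[:, None] - xs[None, :]) ** 2)    # (W', W)
        return np.einsum('phw,yh,xw->pyx', M, wy, wx)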
Idea 4: build a multilayer image representation by stacking and composing kernels.
By following the first three principles described above, the input image I_0 : Ω_0 → R^{p_0} is transformed
into another one I_1 : Ω_1 → R^{p_1}. It is then straightforward to apply the same procedure again to
obtain another map I_2 : Ω_2 → R^{p_2}, then I_3 : Ω_3 → R^{p_3}, etc. By going up in the hierarchy, the
vectors I_k(z) in R^{p_k} represent larger and larger image neighborhoods (aka receptive fields) with
more invariance gained by the pooling layers, akin to classical convolutional neural networks.
The multilayer scheme produces a sequence of maps (I_k)_{k≥0}, where each vector I_k(z) encodes
a point, say f_k(z), in the linear subspace F_k of H_k. Thus, we implicitly represent an image at
layer k as a spatial map f_k : Ω_k → H_k such that ⟨I_k(z), I′_k(z′)⟩ = ⟨f_k(z), f′_k(z′)⟩_{H_k} for all z, z′.
As mentioned previously, the mapping to the RKHS is a key to the multilayer construction. Given I_k,
larger image neighborhoods are represented by patches of size e_k × e_k that can be mapped to a
point in the Cartesian product space H_k^{e_k × e_k} endowed with its natural inner product; finally, the
kernel K_{k+1} defined on these patches can be seen as a kernel on larger image neighborhoods than K_k.
3 End-to-End Kernel Learning with Supervised CKNs
In the previous section, we have described a variant of convolutional kernel networks where linear
subspaces are learned at every layer. This is achieved without supervision by a K-means algorithm
leading to small projection residuals. It is thus natural to also introduce a discriminative approach.
3.1 Backpropagation Rules for Convolutional Kernel Networks
We now consider a prediction task, where we are given a training set of images I_0^1, I_0^2, . . . , I_0^n
with respective scalar labels y_1, . . . , y_n living either in {−1, +1} for binary classification or in R
for regression. For simplicity, we only present these two settings here, but extensions to multiclass
classification and multivariate regression are straightforward. We also assume that we are given a
smooth convex loss function L : R × R → R that measures the fit of a prediction to the true label y.
Given a positive definite kernel K on images, the classical empirical risk minimization formulation
consists of finding a prediction function in the RKHS H associated to K by minimizing the objective

    min_{f ∈ H}  (1/n) Σ_{i=1}^n L(y_i, f(I_0^i)) + (λ/2) ‖f‖_H²,    (5)

where the parameter λ controls the smoothness of the prediction function f with respect to the
geometry induced by the kernel, hence regularizing and reducing overfitting [22]. After training a
convolutional kernel network with k layers, such a positive definite kernel may be defined as
    K_Z(I_0, I′_0) = Σ_{z ∈ Ω_k} ⟨f_k(z), f′_k(z)⟩_{H_k} = Σ_{z ∈ Ω_k} ⟨I_k(z), I′_k(z)⟩,    (6)

where I_k, I′_k are the k-th finite-dimensional feature maps of I_0, I′_0, respectively, and f_k, f′_k the
corresponding maps in Ω_k → H_k, which have been defined in the previous section. The kernel is
also indexed by Z, which represents the network parameters, that is, the subspaces F_1, . . . , F_k, or
equivalently the set of filters Z_1, . . . , Z_k from Eq. (3). Then, formulation (5) becomes equivalent to
    min_{W ∈ R^{p_k × |Ω_k|}}  (1/n) Σ_{i=1}^n L(y_i, ⟨W, I_k^i⟩) + (λ/2) ‖W‖_F²,    (7)

where ‖·‖_F is the Frobenius norm that extends the Euclidean norm to matrices, and, with an abuse of
notation, the maps I_k^i are seen as matrices in R^{p_k × |Ω_k|}. Then, the supervised convolutional kernel
network formulation consists of jointly minimizing (7) with respect to W in R^{p_k × |Ω_k|} and with respect
to the set of filters Z_1, . . . , Z_k, whose columns are constrained to be on the Euclidean sphere.
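Evaluating the supervised objective (7) is then a matter of a few inner products. The snippet below is a minimal sketch, assuming the squared hinge loss used later in the experiments; the names are ours.

    import numpy as np

    def objective(W, maps, labels, lam):
        # Empirical risk of Eq. (7); maps[i] is the feature map I_k^i as a matrix.
        scores = np.array([np.vdot(W, I) for I in maps])      # <W, I_k^i>_F
        losses = np.maximum(0.0, 1.0 - labels * scores) ** 2  # squared hinge
        return losses.mean() + 0.5 * lam * np.sum(W ** 2)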
Computing the derivative with respect to the filters Z_1, . . . , Z_k.
Since we consider a smooth loss function L, e.g., logistic, squared hinge, or square loss, optimizing (7)
with respect to W can be achieved with any gradient-based method. Moreover, when L is convex,
we may also use fast dedicated solvers (see, e.g., [16], and references therein). Optimizing with
respect to the filters Z_j, j = 1, . . . , k, is more involved because of the lack of convexity. Yet, the
objective function is differentiable, and there is hope of finding a "good" stationary point by using
classical stochastic optimization techniques that have been successful for training deep networks.
For that, we need to compute the gradient by using the chain rule, also called "backpropagation" [13].
We instantiate this rule in the next lemma, which we have found useful to simplify the calculation.
Lemma 1 (Perturbation view of backpropagation.)
Consider an image I_0 represented here as a matrix in R^{p_0 × |Ω_0|}, associated to a label y in R,
and call I_k^Z the k-th feature map obtained by encoding I_0 with the network parameters Z. Then, consider
a perturbation E = {ε_1, . . . , ε_k} of the set of filters Z. Assume that we have for all j ≥ 0,

    I_j^{Z+E} = I_j^Z + ΔI_j^{Z,E} + o(‖E‖),    (8)

where ‖E‖ is equal to Σ_{l=1}^k ‖ε_l‖_F, and ΔI_j^{Z,E} is a matrix in R^{p_j × |Ω_j|} such that for all matrices U
of the same size,

    ⟨ΔI_j^{Z,E}, U⟩ = ⟨ε_j, g_j(U)⟩ + ⟨ΔI_{j−1}^{Z,E}, h_j(U)⟩,    (9)

where the inner product is the Frobenius one and g_j, h_j are linear functions. Then,

    ∇_{Z_j} L(y, ⟨W, I_k^Z⟩) = L′(y, ⟨W, I_k^Z⟩) g_j(h_{j+1}(. . . h_k(W))),    (10)

where L′ denotes the derivative of the smooth function L with respect to its second argument.
The proof of this lemma is straightforward and follows from the definition of the Fréchet derivative.
Nevertheless, it is useful to derive the closed form of the gradient in the next proposition.
Proposition 1 (Gradient of the loss with respect to the filters Z_1, . . . , Z_k.)
Consider the quantities introduced in Lemma 1, but denote I_j^Z by I_j for simplicity. By construction,
we have for all j ≥ 1,

    I_j = A_j κ_j(Z_j^⊤ E_j(I_{j−1}) S_j^{−1}) S_j P_j,    (11)

where I_j is seen as a matrix in R^{p_j × |Ω_j|}; E_j is the linear operator that extracts all overlapping
e_{j−1} × e_{j−1} patches from a map, such that E_j(I_{j−1}) is a matrix of size p_{j−1} e_{j−1}² × |Ω_{j−1}|; S_j is a
diagonal matrix whose diagonal entries carry the ℓ_2-norm of the columns of E_j(I_{j−1}); A_j is short
for κ_j(Z_j^⊤ Z_j)^{−1/2}; and P_j is a matrix of size |Ω_{j−1}| × |Ω_j| performing the linear pooling operation.
Then, the gradient of the loss with respect to the filters Z_j, j = 1, . . . , k, is given by (10) with

    g_j(U) = E_j(I_{j−1}) B_j^⊤ − (1/2) Z_j ( κ′_j(Z_j^⊤ Z_j) ⊙ (C_j + C_j^⊤) ),
    h_j(U) = E_j^∗( Z_j B_j + E_j(I_{j−1}) ( S_j^{−2} ⊙ (M_j^⊤ U P_j^⊤ − E_j(I_{j−1})^⊤ Z_j B_j) ) ),    (12)

where U is any matrix of the same size as I_j, M_j = A_j κ_j(Z_j^⊤ E_j(I_{j−1}) S_j^{−1}) S_j is the j-th feature
map before the pooling step, ⊙ is the Hadamard (elementwise) product, E_j^∗ is the adjoint of E_j, and

    B_j = κ′_j(Z_j^⊤ E_j(I_{j−1}) S_j^{−1}) ⊙ (A_j U P_j^⊤)   and   C_j = A_j^{3/2} I_j U^⊤ A_j^{1/2}.    (13)
The proof is presented in Appendix B. Most quantities that appear above admit physical interpretations:
multiplication by P_j performs downsampling; multiplication by P_j^⊤ performs upsampling; multiplication
of E_j(I_{j−1}) on the right by S_j^{−1} performs ℓ_2-normalization of the columns; Z_j^⊤ E_j(I_{j−1}) can
be seen as a spatial convolution of the map I_{j−1} by the filters Z_j; finally, E_j^∗ "combines" a set of
patches into a spatial map by adding to each pixel location the respective patch contributions.
Computing the gradient requires a forward pass to obtain the maps I_j through (11) and a backward
pass that composes the functions g_j, h_j as in (10). The complexity of the forward step is dominated
by the convolutions Z_j^⊤ E_j(I_{j−1}), as in convolutional neural networks. The cost of the backward
pass is the same as the forward one up to a constant factor. Assuming p_j ≤ |Ω_{j−1}|, which is typical
for lower layers that require more computation than upper ones, the most expensive cost is due to
E_j(I_{j−1}) B_j^⊤ and Z_j B_j, which is the same as that of Z_j^⊤ E_j(I_{j−1}). We also pre-compute A_j^{1/2} and A_j^{3/2}
by eigenvalue decompositions, whose cost is reasonable when performed only once per minibatch.
Off-diagonal elements of M_j^⊤ U P_j^⊤ − E_j(I_{j−1})^⊤ Z_j B_j are also not computed, since they are set to
zero after elementwise multiplication with a diagonal matrix. In practice, we also replace A_j by
(κ_j(Z_j^⊤ Z_j) + εI)^{−1/2} with ε = 0.001, which corresponds to performing a regularized projection
onto F_j (see Appendix A). Finally, a small offset of 0.00001 is added to the diagonal entries of S_j.
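A forward pass through one layer, Eq. (11) together with the two numerical safeguards just described, can be sketched as follows. This is our transcription, with E the patch matrix E_j(I_{j-1}) holding one patch per column and P the pooling matrix; kappa is the elementwise kernel function.

    import numpy as np

    def layer_forward(E, Z, kappa, P, eps=1e-3, s_offset=1e-5):
        # One CKN layer, Eq. (11): I_j = A_j kappa(Z' E S^{-1}) S P_j.
        s = np.linalg.norm(E, axis=0) + s_offset       # diagonal of S_j, with offset
        K = kappa(Z.T @ Z) + eps * np.eye(Z.shape[1])  # regularized projection
        w, V = np.linalg.eigh(K)
        A = (V * (1.0 / np.sqrt(w))) @ V.T             # A_j = K^{-1/2}
        M = A @ (kappa(Z.T @ (E / s)) * s)             # feature map before pooling
        return M @ P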
Optimizing hyper-parameters for RBF kernels. When using the kernel (2), the objective is
differentiable with respect to the hyper-parameters α_j. When large amounts of training data are
available and overfitting is not an issue, optimizing the training loss by taking gradient steps with
respect to these parameters seems appropriate instead of using a canonical parameter value. Otherwise,
more involved techniques may be needed; we plan to investigate other strategies in future work.
3.2 Optimization and Practical Heuristics
The backpropagation rules of the previous section have set the stage for using a stochastic gradient
descent method (SGD). We now present a few strategies to accelerate it in our context.
Hybrid convex/non-convex optimization. Recently, many incremental optimization techniques
have been proposed for solving convex optimization problems of the form (7) when n is large but
finite (see [16] and references therein). These methods usually provide a great speed-up over the
stochastic gradient descent algorithm without suffering from the burden of choosing a learning rate.
The price to pay is that they rely on convexity, and they require storing the full training set in memory.
For solving (7) with fixed network parameters Z, this means storing the n maps I_k^i, which is often
reasonable if we do not use data augmentation. To partially leverage these fast algorithms for our
non-convex problem, we have adopted a minimization scheme that alternates between two steps: (i)
fix Z, then make a forward pass on the data to compute the n maps I_k^i and minimize the convex
problem (7) with respect to W using the accelerated MISO algorithm [16]; (ii) fix W, then make one
pass of a projected stochastic gradient algorithm to update the k sets of filters Z_j. The set of network
parameters Z is initialized with the unsupervised learning method described in Section 2.
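The alternating scheme can be summarized in a few lines. The three subroutines are placeholders we supply for a forward pass, a convex solver such as MISO, and the projected SGD pass described in the next paragraphs.

    def alternate_minimization(compute_maps, solve_convex, sgd_filters_pass,
                               X, y, Z, W, rounds=5):
        # (i) convex step in W with Z fixed; (ii) one SGD pass on the filters Z.
        for _ in range(rounds):
            maps = compute_maps(X, Z)          # forward pass: the n maps I_k^i
            W = solve_convex(maps, y, W)       # e.g., accelerated MISO on (7)
            Z = sgd_filters_pass(X, y, Z, W)   # projected SGD epoch on the sphere
        return Z, W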
Preconditioning on the sphere. The kernels κ_j are defined on the sphere; therefore, it is natural
to constrain the filters, that is, the columns of the matrices Z_j, to have unit ℓ_2-norm. As a result,
a classical stochastic gradient descent algorithm updates each filter z at iteration t as follows:
z ← Proj_{‖·‖_2=1}[z − η_t ∇_z L_t], where ∇_z L_t is an estimate of the gradient computed on a minibatch and η_t is
a learning rate. In practice, we found that convergence could be accelerated by preconditioning, which
consists of optimizing after a change of variable to reduce the correlation of gradient entries. For
unconstrained optimization, this heuristic involves choosing a symmetric positive definite matrix Q
and replacing the update direction ∇_z L_t by Q∇_z L_t, or, equivalently, performing the change of
variable z = Q^{1/2} z′ and optimizing over z′. When constraints are present, the case is not as simple,
since Q∇_z L_t may not be a descent direction. Fortunately, it is possible to exploit the manifold
structure of the constraint set (here, the sphere) to perform an appropriate update [1]. Concretely, (i)
we choose a matrix Q per layer that is equal to the inverse covariance matrix of the patches from the
same layer, computed after the initialization of the network parameters; (ii) we perform stochastic
gradient descent steps on the sphere manifold after the change of variable z = Q^{1/2} z′, leading to the
update z ← Proj_{‖·‖_2=1}[z − η_t (I − (1/(z^⊤Qz)) Qzz^⊤) Q∇_z L_t]. Because this heuristic is not a critical
component, but simply an improvement of SGD, we relegate the mathematical details to Appendix C.
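The resulting update is short in NumPy; this is our transcription of the formula above, with Q the per-layer inverse covariance matrix.

    import numpy as np

    def precond_sphere_step(z, grad, Q, lr):
        # z <- Proj_{||.||_2=1}[ z - lr (I - Qzz'/(z'Qz)) Q grad ]
        Qz, Qg = Q @ z, Q @ grad
        direction = Qg - (z @ Qg) / (z @ Qz) * Qz
        z = z - lr * direction
        return z / np.linalg.norm(z)           # projection onto the unit sphere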
Automatic learning rate tuning. Choosing the right learning rate in stochastic optimization is
still an important issue, despite the large amount of existing work on the topic; see, e.g., [13] and
references therein. In our paper, we use the following basic heuristic: the initial learning rate η_t
is chosen "large enough"; then, the training loss is evaluated after each update of the weights W.
When the training loss increases between two epochs, we simply divide the learning rate by two and
perform "back-tracking" by replacing the current network parameters with the previous ones.
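In code, the heuristic amounts to one comparison and a rollback per epoch; a minimal sketch with names of our choosing:

    def adjust_lr(lr, loss, prev_loss, params, prev_params):
        # Halve the learning rate and backtrack whenever the training loss went up.
        if prev_loss is not None and loss > prev_loss:
            return lr / 2.0, prev_params       # restore the previous weights
        return lr, params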
Active-set heuristic. For classification tasks, "easy" samples often have a negligible contribution to
the gradient (see, e.g., [13]). For instance, for the squared hinge loss L(y, ŷ) = max(0, 1 − yŷ)², the
gradient vanishes when the margin yŷ is greater than one. This motivates the following heuristic: we
consider a set of active samples, initially all of them, and remove a sample from the active set as soon
as we obtain zero when computing its gradient. In the subsequent optimization steps, only active
samples are considered, and after each epoch, we randomly reactivate 10% of the inactive ones.
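A sketch of the bookkeeping for the squared hinge loss follows; the 10% reactivation rate is the one quoted above, and the function name is ours.

    import numpy as np

    def update_active_set(scores, labels, active, rng):
        # Deactivate samples with vanishing squared-hinge gradient (margin >= 1),
        # then randomly reactivate 10% of the inactive samples for the next epoch.
        active = active & (labels * scores < 1.0)
        inactive = np.flatnonzero(~active)
        wake = rng.choice(inactive, size=int(0.1 * len(inactive)), replace=False)
        active[wake] = True
        return active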
4 Experiments
We now present experiments on image classification and super-resolution. All experiments were
conducted on 8-core and 10-core 2.4GHz Intel CPUs using C++ and Matlab.
4.1 Image Classification on "Deep Learning" Benchmarks
We consider the datasets CIFAR-10 [12] and SVHN [19], which contain 32 × 32 images from 10
classes. CIFAR-10 is medium-sized, with 50,000 training samples and 10,000 test ones. SVHN is
larger, with 604,388 training examples and 26,032 test ones. We evaluate the performance of a 9-layer
network, designed with few hyper-parameters: for each layer, we learn 512 filters and choose the RBF
kernels κ_j defined in (2) with initial parameters α_j = 1/(0.5²). Layers 1, 3, 5, 7, 9 use 3 × 3 patches
and a subsampling pooling factor of √2, except for layer 9 where the factor is 3; layers 2, 4, 6, 8 use
simply 1 × 1 patches and no subsampling. For CIFAR-10, the parameters α_j are kept fixed during
training, and for SVHN, they are updated in the same way as the filters. We use the squared hinge
loss in a one-vs-all setting to perform multi-class classification (with shared filters Z between classes).
The input of the network is pre-processed with the local whitening procedure described in [20]. We
use the optimization heuristics from the previous section, notably the automatic learning rate scheme,
and a gradient momentum with parameter 0.9, following [12]. The regularization parameter λ and
the number of epochs are set by first running the algorithm on an 80/20 validation split of the training
set. λ is chosen near the canonical parameter λ = 1/n, in the range 2^i/n with i = −4, . . . , 4, and
the number of epochs is at most 100. The initial learning rate is 10, with a minibatch size of 128.
We present our results in Table 1, along with the performance achieved by a few recent methods
without data augmentation or model voting/averaging. In this context, the best published results are
obtained by the generalized pooling scheme of [14]. We achieve about 2% test error on SVHN and
about 10% on CIFAR-10, which positions our method as a reasonably "competitive" one, in the same
ballpark as the deeply supervised nets of [15] or the network-in-network of [17].
Table 1: Test error in percent reported by a few recent publications on the CIFAR-10 and SVHN
datasets without data augmentation or model voting/averaging.

             Stoch P. [29]  MaxOut [9]  NiN [17]  DSN [15]  Gen P. [14]  SCKN (Ours)
  CIFAR-10   15.13          11.68       10.41     9.69      7.62         10.20
  SVHN        2.80           2.47        2.35     1.92      1.69          2.04
Due to lack of space, the results reported here only include a single supervised model. Preliminary
experiments with no supervision also show that one may obtain competitive accuracy with wide,
shallow architectures. For instance, a two-layer network with (1024-16384) filters achieves 14.2%
error on CIFAR-10. Note also that our unsupervised model outperforms the original CKNs [18]: the best
single model from [18] indeed gives 21.7%. Training the same architecture with our approach is two
orders of magnitude faster and gives 19.3%. Another aspect we did not study is model complexity.
Here as well, preliminary experiments are encouraging. Reducing the number of filters to 128 per
layer indeed yields 11.95% error on CIFAR-10 and 2.15% on SVHN. A more precise comparison
with no supervision and with various network complexities will be presented in another venue.
4.2 Image Super-Resolution from a Single Image
Image up-scaling is a challenging problem, where convolutional neural networks have obtained
significant success [7, 8, 27]. Here, we follow [8] and replace traditional convolutional neural
networks by our supervised kernel machine. Specifically, RGB images are converted to the YCbCr
color space and the upscaling method is applied to the luminance channel only, to make the comparison
possible with previous work. Then, the problem is formulated as a multivariate regression one. We
build a database of 200,000 patches of size 32 × 32 randomly extracted from the BSD500 dataset [2],
after removing image 302003.jpg, which overlaps with one of the test images. 16 × 16 versions of the
patches are built using the Matlab function imresize, and upscaled back to 32 × 32 by using bicubic
interpolation; then, the goal is to predict high-resolution images from blurry bicubic interpolations.
The blurry estimates are processed by a 9-layer network, with 3 × 3 patches and 128 filters at every
layer, without linear pooling and zero-padding. Pixel values are predicted with a linear model applied
to the 128-dimensional vectors present at every pixel location of the last layer, and we use the square
loss to measure the fit. The optimization procedure and the kernels κ_j are identical to the ones used
for processing the SVHN dataset in the classification task. The pipeline also includes a pre-processing
step, where we remove from input images a local mean component, obtained by convolving the images
with a 5 × 5 averaging box filter; the mean component is added back after up-scaling.
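The construction of training pairs can be approximated with SciPy as follows. SciPy's cubic spline interpolation stands in for Matlab's imresize here, which is close but not identical, so this sketch will not reproduce the pipeline exactly.

    import numpy as np
    from scipy.ndimage import zoom

    def make_training_pair(patch32):
        # Blurry bicubic-style input and sharp 32x32 target for the regression task.
        low = zoom(patch32, 0.5, order=3)      # 32x32 -> 16x16
        blurry = zoom(low, 2.0, order=3)       # back to 32x32 by cubic interpolation
        return blurry, patch32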
For the evaluation, we consider three datasets: Set5 and Set14 are standard for super-resolution;
Kodim is the Kodak image database, available at http://r0k.us/graphics/kodak/, which contains
high-quality images with no compression or demosaicing artifacts. The evaluation procedure
follows [7, 8, 26, 27] by using the code from the authors' web pages. We present quantitative results
in Table 2. For x3 upscaling, we simply applied our model learned for x2 upscaling twice, followed by a
3/4 downsampling. This is clearly suboptimal, since our model is not trained to up-scale by a factor of 3,
but this naive approach still outperforms other baselines [7, 8, 27] that are trained end-to-end. Note
that [27] also proposes a data augmentation scheme at test time that slightly improves their results. In
Appendix D, we also present a visual comparison between our approach and [8], whose pipeline is
the closest to ours, up to the use of a supervised kernel machine instead of CNNs.
Table 2: Reconstruction accuracy for super-resolution in PSNR (the higher, the better). All CNN
approaches are without data augmentation at test time. See Appendix D for the SSIM quality measure.

  Fact.  Dataset  Bicubic  SC [30]  ANR [26]  A+ [26]  CNN1 [7]  CNN2 [8]  CSCN [27]  SCKN (Ours)
  x2     Set5     33.66    35.78    35.83     36.54    36.34     36.66     36.93      37.07
         Set14    30.23    31.80    31.79     32.28    32.18     32.45     32.56      32.76
         Kodim    30.84    32.19    32.23     32.71    32.62     32.80     32.94      33.21
  x3     Set5     30.39    31.90    31.92     32.58    32.39     32.75     33.10      33.08
         Set14    27.54    28.67    28.65     29.13    29.00     29.29     29.41      29.50
         Kodim    28.43    29.21    29.21     29.57    29.42     29.64     29.76      29.88

Acknowledgments
This work was supported by ANR (MACARON project ANR-14-CE23-0003-01).
References
[1] P.-A. Absil, R. Mahony, and R. Sepulchre. Optimization algorithms on matrix manifolds. Princeton
University Press, 2009.
[2] P. Arbelaez, M. Maire, C. Fowlkes, and J. Malik. Contour detection and hierarchical image segmentation.
IEEE T. Pattern Anal., 33(5):898–916, 2011.
[3] L. Bo, K. Lai, X. Ren, and D. Fox. Object recognition with hierarchical kernel descriptors. In CVPR, 2011.
[4] D. S. Broomhead and D. Lowe. Radial basis functions, multi-variable functional interpolation and adaptive
networks. Technical report, DTIC Document, 1988.
[5] Y. Cho and L. K. Saul. Kernel methods for deep learning. In Adv. NIPS, 2009.
[6] A. Damianou and N. Lawrence. Deep Gaussian processes. In Proc. AISTATS, 2013.
[7] C. Dong, C. C. Loy, K. He, and X. Tang. Learning a deep convolutional network for image super-resolution.
In Proc. ECCV, 2014.
[8] C. Dong, C. C. Loy, K. He, and X. Tang. Image super-resolution using deep convolutional networks. IEEE
T. Pattern Anal., 38(2):295–307, 2016.
[9] I. J. Goodfellow, D. Warde-Farley, M. Mirza, A. Courville, and Y. Bengio. Maxout networks. In Proc.
ICML, 2013.
[10] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proc. CVPR, 2016.
[11] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal
covariate shift. In Proc. ICML, 2015.
[12] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural
networks. In Adv. NIPS, 2012.
[13] Y. Le Cun, L. Bottou, G. B. Orr, and K.-R. Müller. Efficient backprop. In Neural Networks, Tricks of the
Trade, Lecture Notes in Computer Science LNCS 1524, 1998.
[14] C.-Y. Lee, P. W. Gallagher, and Z. Tu. Generalizing pooling functions in convolutional neural networks:
Mixed, gated, and tree. In Proc. AISTATS, 2016.
[15] C.-Y. Lee, S. Xie, P. W. Gallagher, Z. Zhang, and Z. Tu. Deeply-supervised nets. In Proc. AISTATS, 2015.
[16] H. Lin, J. Mairal, and Z. Harchaoui. A universal catalyst for first-order optimization. In Adv. NIPS, 2015.
[17] M. Lin, Q. Chen, and S. Yan. Network in network. In Proc. ICLR, 2013.
[18] J. Mairal, P. Koniusz, Z. Harchaoui, and C. Schmid. Convolutional kernel networks. In Adv. NIPS, 2014.
[19] Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng. Reading digits in natural images with
unsupervised feature learning. In NIPS Workshop on Deep Learning, 2011.
[20] M. Paulin, M. Douze, Z. Harchaoui, J. Mairal, F. Perronin, and C. Schmid. Local convolutional features
with unsupervised training for image retrieval. In Proc. ICCV, 2015.
[21] A. Rahimi and B. Recht. Random features for large-scale kernel machines. In Adv. NIPS, 2007.
[22] B. Schölkopf and A. J. Smola. Learning with kernels: support vector machines, regularization, optimization,
and beyond. MIT Press, 2002.
[23] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In
Proc. ICLR, 2015.
[24] S. Sonnenburg, G. Rätsch, C. Schäfer, and B. Schölkopf. Large scale multiple kernel learning. J. Mach.
Learn. Res., 7:1531–1565, 2006.
[25] V. Sydorov, M. Sakurada, and C. Lampert. Deep Fisher kernels: end to end learning of the Fisher kernel
GMM parameters. In Proc. CVPR, 2014.
[26] R. Timofte, V. Smet, and L. van Gool. Anchored neighborhood regression for fast example-based
super-resolution. In Proc. ICCV, 2013.
[27] Z. Wang, D. Liu, J. Yang, W. Han, and T. Huang. Deep networks for image super-resolution with sparse
prior. In Proc. ICCV, 2015.
[28] C. Williams and M. Seeger. Using the Nyström method to speed up kernel machines. In Adv. NIPS, 2001.
[29] M. D. Zeiler and R. Fergus. Stochastic pooling for regularization of deep convolutional neural networks.
In Proc. ICLR, 2013.
[30] R. Zeyde, M. Elad, and M. Protter. On single image scale-up using sparse-representations. In Curves and
Surfaces, pages 711–730, 2010.
[31] K. Zhang, I. W. Tsang, and J. T. Kwok. Improved Nyström low-rank approximation and error analysis. In
Proc. ICML, 2008.
Bayesian latent structure discovery from
multi-neuron recordings
Scott W. Linderman
Columbia University
[email protected]
Ryan P. Adams
Harvard University and Twitter
[email protected]
Jonathan W. Pillow
Princeton University
[email protected]
Abstract
Neural circuits contain heterogeneous groups of neurons that differ in type, location,
connectivity, and basic response properties. However, traditional methods for
dimensionality reduction and clustering are ill-suited to recovering the structure
underlying the organization of neural circuits. In particular, they do not take
advantage of the rich temporal dependencies in multi-neuron recordings and fail
to account for the noise in neural spike trains. Here we describe new tools for
inferring latent structure from simultaneously recorded spike train data using a
hierarchical extension of a multi-neuron point process model commonly known as
the generalized linear model (GLM). Our approach combines the GLM with flexible
graph-theoretic priors governing the relationship between latent features and neural
connectivity patterns. Fully Bayesian inference via Pólya-gamma augmentation
of the resulting model allows us to classify neurons and infer latent dimensions of
circuit organization from correlated spike trains. We demonstrate the effectiveness
of our method with applications to synthetic data and multi-neuron recordings in
primate retina, revealing latent patterns of neural types and locations from spike
trains alone.
1 Introduction
Large-scale recording technologies are revolutionizing the field of neuroscience [e.g., 1, 5, 15]. These
advances present an unprecedented opportunity to probe the underpinnings of neural computation,
but they also pose an extraordinary statistical and computational challenge: how do we make sense
of these complex recordings? To address this challenge, we need methods that not only capture
variability in neural activity and make accurate predictions, but also expose meaningful structure
that may lead to novel hypotheses and interpretations of the circuits under study. In short, we need
exploratory methods that yield interpretable representations of large-scale neural data.
For example, consider a population of distinct retinal ganglion cells (RGCs). These cells only respond
to light within their small receptive field. Moreover, decades of painstaking work have revealed a
plethora of RGC types [16]. Thus, it is natural to characterize these cells in terms of their type and
the location of their receptive field center. Rather than manually searching for such a representation
by probing with different visual stimuli, here we develop a method to automatically discover this
structure from correlated patterns of neural activity.
Our approach combines latent variable network models [6, 10] with generalized linear models of
neural spike trains [11, 19, 13, 20] in a hierarchical Bayesian framework. The network serves as a
bridge, connecting interpretable latent features of interest to the temporal dynamics of neural spike
trains. Unlike many previous studies [e.g., 2, 3, 17], our goal here is not necessarily to recover true
synaptic connectivity, nor is our primary emphasis on prediction. Instead, our aim is to explore
and compare latent patterns of functional organization, integrating over possible networks. To do
so, we develop an efficient Markov chain Monte Carlo (MCMC) inference algorithm by leveraging
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
[Figure 1: panels (a) Network, (b) Firing Rate, and (c) Spike Train for cells 1, 2, 3, ..., N over time; panels (d)–(g) show model combinations: (d) A ∼ Dense, W ∼ Gaussian; (e) A ∼ Bernoulli, W ∼ Distance; (f) A ∼ SBM, W ∼ SBM; (g) A ∼ Distance, W ∼ SBM.]
Figure 1: Components of the generative model. (a) Neurons influence one another via a sparse weighted network
of interactions. (b) The network parameterizes an autoregressive model with a time-varying activation. (c)
Spike counts are randomly drawn from a discrete distribution with a logistic link function. Each spike induces
an impulse response on the activation of downstream neurons. (d) Standard GLM analyses correspond to a
fully-connected network with Gaussian or Laplace distributed weights, depending on the regularization. (e-g) In
this work, we consider structured models like the stochastic block model (SBM), in which neurons have discrete
latent types (e.g. square or circle), and the latent distance model, in which neurons have latent locations that
determine their probability of connection, capturing intuitive and interpretable patterns of connectivity.
Pólya-gamma augmentation to derive collapsed Gibbs updates for the network. We illustrate the
robustness and scalability of our algorithm with synthetic data examples, and we demonstrate the
scientific potential of our approach with an application to retinal ganglion cell recordings, where we
recover the true underlying cell types and locations from spike trains alone, without reference to the
stimulus.
2 Probabilistic Model
Figure 1 illustrates the components of our framework. We begin with a prior distribution on networks
that generates a set of weighted connections between neurons (Fig. 1a). A directed edge indicates a
functional relationship between the spikes of one neuron and the activation of its downstream neighbor.
Each spike induces a weighted impulse response on the activation of the downstream neuron (Fig. 1b).
The activation is converted into a nonnegative firing rate from which spikes are stochastically sampled
(Fig. 1c). These spikes then feed back into the subsequent activation, completing an autoregressive
loop, the hallmark of the GLM [11, 19]. Models like these have provided valuable insight into
complex population recordings [13]. We detail the three components of this model in the reverse
order, working backward from the observed spike counts through the activation to the underlying
network.
2.1 Logistic Spike Count Models
Generalized linear models assume a stochastic spike generation mechanism. Consider a matrix of
spike counts, S ∈ ℕ^{T×N}, for T time bins and N neurons. The expected number of spikes fired by
the n-th neuron in the t-th time bin, E[s_{t,n}], is modeled as a nonlinear function of the instantaneous
activation, ψ_{t,n}, and a static, neuron-specific parameter, ν_n. Table 1 enumerates the three spike count
models considered in this paper, all of which use the logistic function, σ(ψ) = e^ψ (1 + e^ψ)^{−1}, to
rectify the activation. The Bernoulli distribution is appropriate for binary spike counts, whereas the
Distribution  | p(s | ψ, ν)                 | Standard Form                          | E[s]    | Var(s)
Bern(σ(ψ))    | σ(ψ)^s σ(−ψ)^{1−s}          | (e^ψ)^s / (1 + e^ψ)                    | σ(ψ)    | σ(ψ) σ(−ψ)
Bin(ν, σ(ψ))  | C(ν, s) σ(ψ)^s σ(−ψ)^{ν−s}  | C(ν, s) (e^ψ)^s / (1 + e^ψ)^ν          | ν σ(ψ)  | ν σ(ψ) σ(−ψ)
NB(ξ, σ(ψ))   | C(ξ+s−1, s) σ(ψ)^s σ(−ψ)^ξ  | C(ξ+s−1, s) (e^ψ)^s / (1 + e^ψ)^{ξ+s}  | ξ e^ψ   | ξ e^ψ / σ(−ψ)
Table 1: Table of conditional spike count distributions, their parameterizations, and their properties. Here C(n, k) denotes the binomial coefficient.
binomial and negative binomial have support for s ∈ [0, ν] and s ∈ [0, ∞), respectively. Notably
lacking from this list is the Poisson distribution, which is not directly amenable to the augmentation
schemes we derive below; however, both the binomial and negative binomial distributions converge to
the Poisson under certain limits. Moreover, these distributions afford the added flexibility of modeling
under- and over-dispersed spike counts, a biologically significant feature of neural spiking data [4].
Specifically, while the Poisson has unit dispersion (its mean is equal to its variance), the binomial
distribution is always under-dispersed, since its mean always exceeds its variance, and the negative
binomial is always over-dispersed, with variance greater than its mean.
Importantly, all of these distributions can be written in a standard form, as shown in Table 1. We
exploit this fact to develop an efficient Markov chain Monte Carlo (MCMC) inference algorithm
described in Section 3.
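To make the standard form concrete, the following sketch (our own illustration, not the authors' code; the generic static parameter is passed as `param`, standing in for ν or ξ) maps each model to the triple (a, b, c) with p(s | ψ) = c (e^ψ)^a / (1 + e^ψ)^b, the representation exploited by the augmentation scheme of Section 3, and checks one of the moment formulas of Table 1 by simulation:

import numpy as np
from scipy.special import comb

def standard_form(dist, s, param=None):
    """Return (a, b, c) such that p(s | psi) = c * (e^psi)^a / (1 + e^psi)^b."""
    if dist == 'bern':   # sigma(psi)^s sigma(-psi)^(1 - s)
        return s, 1, 1.0
    if dist == 'bin':    # C(nu, s) sigma(psi)^s sigma(-psi)^(nu - s), param = nu
        return s, param, comb(param, s)
    if dist == 'nb':     # C(xi + s - 1, s) sigma(psi)^s sigma(-psi)^xi, param = xi
        return s, param + s, comb(param + s - 1, s)
    raise ValueError(dist)

def log_likelihood(dist, s, psi, param=None):
    a, b, c = standard_form(dist, s, param)
    return np.log(c) + a * psi - b * np.log1p(np.exp(psi))

# Sanity check of Table 1 for the binomial model:
# E[s] = nu * sigma(psi) and Var(s) = nu * sigma(psi) * sigma(-psi).
psi, nu = 0.3, 10
sigma = 1.0 / (1.0 + np.exp(-psi))
samples = np.random.binomial(nu, sigma, size=100_000)
print(samples.mean(), nu * sigma)
print(samples.var(), nu * sigma * (1.0 - sigma))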
2.2 Linear Activation Model
The instantaneous activation of neuron n at time t is modeled as a linear, autoregressive function of
preceding spike counts of neighboring neurons,

ψ_{t,n} ≜ b_n + ∑_{m=1}^{N} ∑_{Δt=1}^{Δt_max} h_{m→n}[Δt] · s_{t−Δt,m},    (1)

where b_n is the baseline activation of neuron n and h_{m→n} : {1, . . . , Δt_max} → ℝ is an impulse
response function that models the influence spikes on neuron m have on the activation of neuron n
at a delay of Δt. To model the impulse response, we use a spike-and-slab formulation [8],

h_{m→n}[Δt] = a_{m→n} ∑_{k=1}^{K} w^{(k)}_{m→n} φ_k[Δt].    (2)
Here, a_{m→n} ∈ {0, 1} is a binary variable indicating the presence or absence of a connection
from neuron m to neuron n, the weight w_{m→n} = [w^{(1)}_{m→n}, . . . , w^{(K)}_{m→n}] denotes the strength of the
connection, and {φ_k}_{k=1}^{K} is a collection of fixed basis functions. In this paper, we consider scalar
weights (K = 1) and use an exponential basis function, φ_1[Δt] = e^{−Δt/τ}, with time constant
of τ = 15 ms. Since the basis function and the spike train are fixed, we precompute the convolution of
the spike train and the basis function to obtain ŝ^{(k)}_{t,m} = ∑_{Δt=1}^{Δt_max} φ_k[Δt] · s_{t−Δt,m}. Finally, we combine
the connections, weights, and filtered spike trains and write the activation as,

ψ_{t,n} = (a_n ⊙ w_n)^T ŝ_t,    (3)

where a_n = [1, a_{1→n} 1_K, . . . , a_{N→n} 1_K], w_n = [b_n, w^{(1)}_{1→n}, . . . , w^{(K)}_{N→n}], and ŝ_t = [1, ŝ_{t,1}, . . . , ŝ_{t,N}].
Here, ⊙ denotes the Hadamard (elementwise) product and 1_K is a length-K vector of ones. Hence, all
of these vectors are of size 1 + NK. The difference between our formulation and the standard GLM
is that we have explicitly modeled the sparsity of the weights in a_{m→n}. In typical formulations [e.g.,
13], all connections are present and the weights are regularized with ℓ1 and ℓ2 penalties to promote
sparsity. Instead, we consider structured approaches to modeling the sparsity and weights.
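As a concrete illustration of Eqs. 1–3 (a sketch of ours, not the authors' implementation), the activation for all time bins can be computed by convolving each spike train with the exponential basis and taking a masked weighted sum:

import numpy as np

def activation(S, A, W, b, tau=15, dt_max=100):
    """psi[t, n] = b[n] + sum_m sum_dt A[m, n] * W[m, n] * phi[dt] * S[t - dt, m].

    S : (T, N) spike counts     A : (N, N) binary adjacency
    W : (N, N) scalar weights   b : (N,)   baseline activations
    Assumes scalar weights (K = 1) and phi[dt] = exp(-dt / tau), per the paper.
    """
    T, N = S.shape
    phi = np.exp(-np.arange(1, dt_max + 1) / tau)   # exponential basis
    # Filtered spike trains: s_hat[t, m] = sum_dt phi[dt] * S[t - dt, m].
    S_hat = np.zeros((T, N))
    for dt in range(1, dt_max + 1):
        S_hat[dt:] += phi[dt - 1] * S[:-dt]
    # The Hadamard product A * W zeroes out the absent connections.
    return b + S_hat @ (A * W)

# Tiny example: 3 neurons, 1 ms bins, random spikes and network.
rng = np.random.default_rng(0)
S = rng.binomial(1, 0.05, size=(1000, 3))
A = rng.binomial(1, 0.5, size=(3, 3))
W = 0.1 * rng.standard_normal((3, 3))
psi = activation(S, A, W, b=np.full(3, -3.0))
print(psi.shape)   # (1000, 3)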
2.3 Random Network Models
Patterns of functional interaction can provide great insight into the computations performed by neural
circuits. Indeed, many circuits are informally described in terms of "types" of neurons that perform
a particular role, or the "features" that neurons encode. Random network models formalize these
Name                    | ρ(u_m, u_n, θ)             | μ(v_m, v_n, θ)          | Σ(v_m, v_n, θ)
Dense Model             | 1                          | μ                       | σ²
Independent Model       | ρ                          | μ                       | σ²
Stochastic Block Model  | ρ_{u_m→u_n}                | μ_{v_m→v_n}             | Σ_{v_m→v_n}
Latent Distance Model   | σ(−||u_n − u_m||² + γ₀)    | −||v_n − v_m||² + μ₀    | η²
Table 2: Random network models for the binary adjacency matrix or the Gaussian weight matrix.
intuitive descriptions. Types and features correspond to latent variables in a probabilistic model that
governs how likely neurons are to connect and how strongly they influence each other.
Let A = {{a_{m→n}}} and W = {{w_{m→n}}} denote the binary adjacency matrix and the real-valued
array of weights, respectively. Now suppose {u_n}_{n=1}^{N} and {v_n}_{n=1}^{N} are sets of neuron-specific
latent variables that govern the distributions over A and W. Given these latent variables and global
parameters θ, the entries in A are conditionally independent Bernoulli random variables, and the
entries in W are conditionally independent Gaussians. That is,

p(A, W | {u_n, v_n}_{n=1}^{N}, θ) = ∏_{m=1}^{N} ∏_{n=1}^{N} Bern(a_{m→n} | ρ(u_m, u_n, θ)) × N(w_{m→n} | μ(v_m, v_n, θ), Σ(v_m, v_n, θ)),    (4)

where ρ(·), μ(·), and Σ(·) are functions that output a probability, a mean vector, and a covariance
matrix, respectively. We recover the standard GLM when ρ(·) ≡ 1, but here we can take advantage
of structured priors like the stochastic block model (SBM) [9], in which each neuron has a discrete
type, and the latent distance model [6], in which each neuron has a latent location. Table 2 outlines
the various models considered in this paper.
We can mix and match these models as shown in Figure 1(d-g). For example, in Fig. 1g, the adjacency
matrix is distance-dependent and the weights are block structured. Thus, we have a flexible language
for expressing hypotheses about patterns of interaction. In fact, the simple models enumerated above
are instances of a rich family of exchangeable networks known as Aldous-Hoover random graphs,
which have been recently reviewed by Orbanz and Roy [10].
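For concreteness, here is a small generative sketch of Eq. 4 (our own, with illustrative hyperparameter values; the function and variable names are ours) for the combination used later in Section 5: a latent distance model for A and a stochastic block model for W:

import numpy as np

def sample_network(U, z, mu_blocks, sigma2=0.1, gamma0=1.0, seed=0):
    """Sample (A, W) as in Eq. 4: A ~ latent distance model, W ~ SBM.

    U         : (N, 2) latent locations u_n
    z         : (N,)   discrete latent types
    mu_blocks : (C, C) mean weight between types, mu_{c -> c'}
    """
    rng = np.random.default_rng(seed)
    # rho[m, n] = sigma(-||u_n - u_m||^2 + gamma0)
    D2 = ((U[:, None, :] - U[None, :, :]) ** 2).sum(-1)
    rho = 1.0 / (1.0 + np.exp(D2 - gamma0))
    A = rng.binomial(1, rho)                 # Bernoulli adjacency
    mu = mu_blocks[np.ix_(z, z)]             # block-structured means
    W = rng.normal(mu, np.sqrt(sigma2))      # Gaussian weights
    return A, W

# Two types with within-type excitation and between-type inhibition,
# mimicking the on/off structure recovered in Section 5.
rng = np.random.default_rng(1)
N = 10
U = rng.standard_normal((N, 2))
z = rng.integers(0, 2, size=N)
mu_blocks = np.array([[0.5, -0.5], [-0.5, 0.5]])
A, W = sample_network(U, z, mu_blocks)
print((A * W).round(2))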
3 Bayesian Inference
Generalized linear models are often fit via maximum a posteriori (MAP) estimation [11, 19, 13, 20].
However, as we scale to larger populations of neurons, there will inevitably be structure in the
posterior that is not reflected with a point estimate. Technological advances are expanding the number
of neurons that can be recorded simultaneously, but "high-throughput" recording of many individuals
is still a distant hope. Therefore we expect the complexities of our models to expand faster than the
available distinct data sets to fit them. In this situation, accurately capturing uncertainty is critical.
Moreover, in the Bayesian framework, we also have a coherent way to perform model selection
and evaluate hypotheses regarding complex underlying structure. Finally, after introducing a binary
adjacency matrix and hierarchical network priors, the log posterior is no longer a concave function of
model parameters, making direct optimization challenging (though see Soudry et al. [17] for recent
advances in tackling similar problems). These considerations motivate a fully Bayesian approach.
Computation in rich Bayesian models is often challenging, but through thoughtful modeling decisions
it is sometimes possible to find representations that lead to efficient inference. In this case, we have
carefully chosen the logistic models of the preceding section in order to make it possible to apply
the Pólya-gamma augmentation scheme [14]. The principal advantage of this approach is that, given
the Pólya-gamma auxiliary variables, the conditional distribution of the weights is Gaussian, and
hence is amenable to efficient Gibbs sampling. Recently, Pillow and Scott [12] used this technique to
develop inference algorithms for negative binomial factor analysis models of neural spike trains. We
build on this work and show how this conditionally Gaussian structure can be exploited to derive
efficient, collapsed Gibbs updates.
3.1 Collapsed Gibbs updates for Gaussian observations
Suppose the observations were actually Gaussian distributed, i.e. s_{t,n} ∼ N(ψ_{t,n}, η_n). The most
challenging aspect of inference is then sampling the posterior distribution over discrete connections, A. There may be many posterior modes corresponding to different patterns of connectivity.
Moreover, a_{m→n} and w_{m→n} are often highly correlated, which leads to poor mixing of naïve Gibbs
sampling. Fortunately, when the observations are Gaussian, we may integrate over possible weights
and sample the binary adjacency matrix from its collapsed conditional distribution.
We combine the conditionally independent Gaussian priors on {w_{m→n}} and b_n into a joint Gaussian
distribution, w_n | {v_n}, θ ∼ N(w_n | μ_n, Σ_n), where Σ_n is a block diagonal covariance matrix.
Since ψ_{t,n} is linear in w_n (see Eq. 3), a Gaussian likelihood is conjugate with this Gaussian prior,
given a_n and Ŝ = {ŝ_t}_{t=1}^{T}. This yields the following closed-form conditional:
p(w_n | Ŝ, a_n, μ_n, Σ_n) ∝ N(w_n | μ_n, Σ_n) ∏_{t=1}^{T} N(s_{t,n} | (a_n ⊙ w_n)^T ŝ_t, η_n) ∝ N(w_n | μ̃_n, Σ̃_n),

Σ̃_n = [Σ_n^{−1} + (Ŝ^T (η_n^{−1} I) Ŝ) ⊙ (a_n a_n^T)]^{−1},    μ̃_n = Σ̃_n [Σ_n^{−1} μ_n + (Ŝ^T (η_n^{−1} I) s_{:,n}) ⊙ a_n].
Now, consider the conditional distribution of a_n, integrating out the corresponding weights.
The prior distribution over a_n is a product of Bernoulli distributions with parameters
ρ_n = {ρ(u_m, u_n, θ)}_{m=1}^{N}. The conditional distribution is proportional to the ratio of the prior
and posterior partition functions,

p(a_n | Ŝ, μ_n, Σ_n, ρ_n) = ∫ p(a_n, w_n | Ŝ, μ_n, Σ_n, ρ_n) dw_n
= p(a_n | ρ_n) × ( |Σ_n|^{−1/2} exp{−(1/2) μ_n^T Σ_n^{−1} μ_n} ) / ( |Σ̃_n|^{−1/2} exp{−(1/2) μ̃_n^T Σ̃_n^{−1} μ̃_n} ).

Thus, we perform a joint update of a_n and w_n by collapsing out the weights to directly sample the
binary entries of a_n. We iterate over each entry, a_{m→n}, and sample it from its conditional distribution
given {a_{m′→n}}_{m′≠m}. Having sampled a_n, we sample w_n from its Gaussian conditional.
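A minimal sketch of this collapsed update for a single neuron (our own illustration; the variable names are ours, the observation noise eta is a scalar, and numerical niceties such as log-odds clipping are omitted):

import numpy as np

def score(a, S_hat, s_n, mu, Sigma, rho, eta):
    """Log of the collapsed conditional p(a_n | ...) up to an additive constant,
    following the prior/posterior partition function ratio above."""
    Sigma_inv = np.linalg.inv(Sigma)
    Lam = Sigma_inv + (S_hat.T @ S_hat / eta) * np.outer(a, a)  # posterior precision
    Sigma_t = np.linalg.inv(Lam)
    mu_t = Sigma_t @ (Sigma_inv @ mu + (S_hat.T @ s_n / eta) * a)
    lp_prior = np.sum(a[1:] * np.log(rho) + (1 - a[1:]) * np.log1p(-rho))
    _, logdet = np.linalg.slogdet(Sigma_t)
    return lp_prior + 0.5 * logdet + 0.5 * mu_t @ Lam @ mu_t, (mu_t, Sigma_t)

def gibbs_update(a, S_hat, s_n, mu, Sigma, rho, eta, rng):
    """One sweep over the entries of a_n, then a draw of w_n; a[0] = 1 is the bias."""
    for m in range(1, len(a)):
        lp = []
        for val in (0, 1):
            a[m] = val
            lp.append(score(a, S_hat, s_n, mu, Sigma, rho, eta)[0])
        a[m] = rng.binomial(1, 1.0 / (1.0 + np.exp(lp[0] - lp[1])))
    _, (mu_t, Sigma_t) = score(a, S_hat, s_n, mu, Sigma, rho, eta)
    return a, rng.multivariate_normal(mu_t, Sigma_t)

# Demo: recover the sparsity pattern of a known weight vector.
rng = np.random.default_rng(0)
T, N = 500, 4
S_hat = np.hstack([np.ones((T, 1)), rng.standard_normal((T, N))])  # first column: bias
w_true = np.array([1.0, 0.8, 0.0, -0.6, 0.0])                      # zeros = absent edges
s_n = S_hat @ w_true + 0.1 * rng.standard_normal(T)
a, w = gibbs_update(np.ones(N + 1, dtype=int), S_hat, s_n,
                    mu=np.zeros(N + 1), Sigma=np.eye(N + 1),
                    rho=np.full(N, 0.5), eta=0.01, rng=rng)
print(a, w.round(2))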
3.2 Pólya-gamma augmentation for discrete observations
Now, let us turn to the non-conjugate case of discrete count observations. The Pólya-gamma augmentation [14] introduces auxiliary variables, ω_{t,n}, conditioned upon which the discrete likelihood
appears Gaussian and our collapsed Gibbs updates apply. The integral identity underlying this scheme
is

c (e^ψ)^a / (1 + e^ψ)^b = c 2^{−b} e^{κψ} ∫_0^∞ e^{−ωψ²/2} p_PG(ω | b, 0) dω,    (5)

where κ = a − b/2 and p_PG(ω | b, 0) is the density of the Pólya-gamma distribution PG(b, 0), which
does not depend on ψ. Notice that the discrete likelihoods in Table 1 can all be rewritten like
the left-hand side of (5), for some a, b, and c that are functions of s and ν. Using (5) along with
priors p(ψ) and p(ν), we write the joint density of (ψ, s, ν) as

p(s, ψ, ν) = ∫_0^∞ p(ψ) p(ν) c(s, ν) 2^{−b(s,ν)} e^{κ(s,ν)ψ} e^{−ωψ²/2} p_PG(ω | b(s, ν), 0) dω.    (6)

The integrand of Eq. 6 defines a joint density on (s, ψ, ν, ω) which admits p(s, ψ, ν) as a marginal
density. Conditioned on the auxiliary variable, ω, the likelihood as a function of ψ is

p(s | ψ, ν, ω) ∝ e^{κ(s,ν)ψ} e^{−ωψ²/2} ∝ N(ω^{−1} κ(s, ν) | ψ, ω^{−1}).

Thus, after conditioning on s, ν, and ω, we effectively have a linear Gaussian likelihood for ψ.
We apply this augmentation scheme to the full model, introducing auxiliary variables, ω_{t,n}, for each
spike count, s_{t,n}. Given these variables, the conditional distribution of w_n can be computed in closed
[Figure 2: weighted adjacency matrix panels (a)–(f): (a, d) true W and A; (b, e) MCMC posterior mean; (c, f) MAP estimates.]
Figure 2: Weighted adjacency matrices showing inferred networks and connection probabilities for synthetic
data. (a,d) True network. (b,e) Posterior mean using joint inference of network GLM. (c,f) MAP estimation.
form, as before. Let κ_n = [κ(s_{1,n}, ν_n), . . . , κ(s_{T,n}, ν_n)] and Ω_n = diag([ω_{1,n}, . . . , ω_{T,n}]). Then
we have p(w_n | s_n, Ŝ, a_n, ω_n, μ_n, Σ_n, ν_n) ∝ N(w_n | μ̃_n, Σ̃_n), where

Σ̃_n = [Σ_n^{−1} + (Ŝ^T Ω_n Ŝ) ⊙ (a_n a_n^T)]^{−1},    μ̃_n = Σ̃_n [Σ_n^{−1} μ_n + (Ŝ^T κ_n) ⊙ a_n].

Having introduced auxiliary variables, we must now derive Markov transitions to update them as
well. Fortunately, the Pólya-gamma distribution is designed such that the conditional distribution of
the auxiliary variables is simply a "tilted" Pólya-gamma distribution,

p(ω_{t,n} | s_{t,n}, ν_n, ψ_{t,n}) = p_PG(ω_{t,n} | b(s_{t,n}, ν_n), ψ_{t,n}).

These auxiliary variables are conditionally independent given the activation and hence can be
sampled in parallel. Moreover, efficient algorithms are available to generate Pólya-gamma random
variates [21]. Our Gibbs updates for the remaining parameters and latent variables (ν_n, u_n, v_n, and θ)
are described in the supplementary material. A Python implementation of our inference algorithm is
available at https://github.com/slinderman/pyglm.
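Exact Pólya-gamma samplers are given in [21] and implemented in the linked code; purely as an illustration of what a PG(b, c) draw looks like, the infinite sum-of-gammas representation from [14] can be truncated (a rough sketch of ours, not the sampler the paper uses):

import numpy as np

def pg_sample(b, c, K=200, rng=None):
    """Approximate PG(b, c) draw via the truncated representation
    omega = (1 / (2 pi^2)) * sum_k g_k / ((k - 1/2)^2 + c^2 / (4 pi^2)),
    with g_k ~ Gamma(b, 1) (Polson, Scott & Windle, 2013)."""
    rng = rng or np.random.default_rng()
    k = np.arange(1, K + 1)
    g = rng.gamma(b, 1.0, size=K)
    return (g / ((k - 0.5) ** 2 + c ** 2 / (4 * np.pi ** 2))).sum() / (2 * np.pi ** 2)

rng = np.random.default_rng(0)
draws = np.array([pg_sample(1.0, 2.0, rng=rng) for _ in range(20_000)])
# E[PG(b, c)] = b / (2 c) * tanh(c / 2); the truncated sampler should come close.
print(draws.mean(), 1.0 / (2 * 2.0) * np.tanh(1.0))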
4 Synthetic Data Experiments
The need for network models is most pressing in recordings of large populations where the network
is difficult to estimate and even harder to interpret. To assess the robustness and scalability of our
framework, we apply our methods to simulated data with known ground truth. We simulate a one
minute recording (1ms time bins) from a population of 200 neurons with discrete latent types that
govern the connection strength via a stochastic block model and continuous latent locations that
govern connection probability via a latent distance model. The spikes are generated from a Bernoulli
observation model.
First, we show that our approach of jointly inferring the network and its latent variables can provide
dramatic improvements over alternative approaches. For comparison, consider the two-step procedure
of Stevenson et al. [18] in which the network is fit with an ℓ1-regularized GLM and then a probabilistic
network model is fit to the GLM connection weights. The advantage of this strategy is that the
expensive GLM fitting is only performed once. However, when the data is limited, both the network
and the latent variables are uncertain. Our Bayesian approach finds a very accurate network (Fig. 2b)
Figure 3: Scalability of our inference algorithm as a function of: (a) the number of time bins, T; (b) the number
of neurons, N; and (c) the average sparsity of the network, ρ. Wall-clock time is divided into time spent sampling
auxiliary variables ("Obs.") and time spent sampling the network ("Net.").
by jointly sampling networks and latent variables. In contrast, the standard GLM does not account
for latent structure and finds strong connections as well as spuriously correlated neurons (Fig. 2c).
Moreover, our fully Bayesian approach finds a set of latent locations that mimics the true locations
and therefore accurately estimates connection probability (Fig. 2e). In contrast, subsequently fitting a
latent distance model to the adjacency matrix of a thresholded GLM network finds an embedding
that has no resemblance to the true locations, which is reflected in its poor estimate of connection
probability (Fig. 2f).
Next, we address the scalability of our MCMC algorithm. Three major parameters govern the
complexity of inference: the number of time bins, T; the number of neurons, N; and the level of
sparsity, ρ. The following experiments were run on a quad-core Intel i5 with 6GB of RAM. As shown
in Fig. 3a, the wall clock time per iteration scales linearly with T since we must resample N T auxiliary
variables. We scale at least quadratically with N due to the network, as shown in Fig. 3b. However,
the total cost could actually be worse than quadratic since the cost of updating each connection could
depend on N. Fortunately, the complexity of our collapsed Gibbs sampling algorithm only depends
on the number of incident connections, d, or equivalently, the sparsity ρ = d/N. Specifically, we
must solve a linear system of size d, which incurs a cubic cost, as seen in Fig. 3c.
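A back-of-the-envelope check of that cubic scaling (a micro-benchmark sketch of ours, not an experiment from the paper):

import time
import numpy as np

rng = np.random.default_rng(0)
for d in (50, 100, 200, 400):
    X = rng.standard_normal((d, d))
    X = X @ X.T + d * np.eye(d)          # a well-conditioned d x d system
    b = rng.standard_normal(d)
    t0 = time.perf_counter()
    for _ in range(100):
        np.linalg.solve(X, b)
    print(d, time.perf_counter() - t0)   # roughly 8x per doubling of d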
5 Retinal Ganglion Cells
Finally, we demonstrate the efficacy of this approach with an application to spike trains simultaneously
recorded from a population of 27 retinal ganglion cells (RGCs), which have previously been studied
by Pillow et al. [13]. Retinal ganglion cells respond to light shone upon their receptive field. Thus, it
is natural to characterize these cells by the location of their receptive field center. Moreover, retinal
ganglion cells come in a variety of types [16]. This population is comprised of two types of cells, on
and off cells, which are characterized by their response to visual stimuli. On cells increase their firing
when light is shone upon their receptive field; off cells decrease their firing rate in response to light in
their receptive field. In this case, the population is driven by a binary white noise stimulus. Given
the stimulus, the cell locations and types are readily inferred. Here, we show how these intuitive
representations can be discovered in a purely unsupervised manner given one minute of spiking data
alone and no knowledge of the stimulus.
Figure 4 illustrates the results of our analysis. Since the data are binned at 1ms resolution, we have
at most one spike per bin and we use a Bernoulli observation model. We fit the 12 network models
of Table 2 (4 adjacency models and 3 weight models), and we find that, in terms of predictive log
likelihood of held-out neurons, a latent distance model of the adjacency matrix and SBM of the
weight matrix performs best (Fig. 4a). See the supplementary material for a detailed description of
this comparison. Looking into the latent locations underlying the adjacency matrix of our network GLM
(NGLM), we find that the inferred distances between cells are highly correlated with the distances
between the true locations. For comparison, we also fit a 2D Bernoulli linear dynamical system
(LDS), the Bernoulli equivalent of the Poisson LDS [7], and we take rows of the N × 2 emission
matrix as locations. In contrast to our network GLM, the distances between LDS locations are nearly
uncorrelated with the true distances (Fig. 4b) since the LDS does not capture the fact that distance
only affects the probability of connection, not the weight. Not only are our distances accurate, the
inferred locations are nearly identical to the true locations, up to affine transformation. In Fig. 4c,
semitransparent markers show the inferred on cell locations, which have been rotated and scaled to
[Figure 4: (a) Weights, model comparison; (b) Pairwise Distances, inferred distance [a.u.] versus true distance [a.u.] for NGLM and LDS; (c) On Cell Locations; (d)–(f) network panels described in the caption.]
Figure 4: Using our framework, retinal ganglion cell types and locations can be inferred from spike trains alone.
(a) Model comparison. (b) True and inferred distances between cells. (c) True and inferred cell locations. (d-f)
Inferred network, connection probability, and mean weight, respectively. See main text for further details.
best align with the true locations shown by the outlined marks. Based solely on patterns of correlated
spiking, we have recovered the receptive field arrangements.
Fig. 4d shows the inferred network, A ⊙ W, under a latent distance model of connection probability
and a stochastic block model for connection weight. The underlying connection probabilities from
the distance model are shown in Fig. 4e. Finally, Fig. 4f shows that we have discovered not only
the cell locations, but also their latent types. With an SBM, the mean weight is a function of latent
type, and under the posterior, the neurons are clearly clustered into the two true types that exhibit the
expected within-class excitation and between-class inhibition.
6 Conclusion
Our results with both synthetic and real neural data provide compelling evidence that our methods can
find meaningful structure underlying neural spike trains. Given the extensive work on characterizing
retinal ganglion cell responses, we have considerable evidence that the representation we learn from
spike trains alone is indeed the optimal way to summarize this population of cells. This lends us
confidence that we may trust the representations learned from spike trains recorded from more
enigmatic brain areas as well. While we have omitted stimulus from our models and only used
it for confirming types and locations, in practice we could incorporate it into our model and even
capture type- and location-dependent patterns of stimulus dependence with our hierarchical approach.
Likewise, the network GLM could be combined with the PLDS as in Vidne et al. [20] to capture
sources of low dimensional, shared variability.
Latent functional networks underlying spike trains can provide unique insight into the structure of
neural populations. Looking forward, methods that extract interpretable representations from complex
neural data, like those developed here, will be key to capitalizing on the dramatic advances in neural
recording technology. We have shown that networks provide a natural bridge to connect neural types
and features to spike trains, and demonstrated promising results on both real and synthetic data.
Acknowledgments. We thank E. J. Chichilnisky, A. M. Litke, A. Sher and J. Shlens for retinal data. SWL is
supported by the Simons Foundation SCGB-418011. RPA is supported by NSF IIS-1421780 and the Alfred P.
Sloan Foundation. JWP was supported by grants from the McKnight Foundation, Simons Collaboration on the
Global Brain (SCGB AWD1004351), NSF CAREER Award (IIS-1150186), and NIMH grant MH099611.
References
[1] M. B. Ahrens, M. B. Orger, D. N. Robson, J. M. Li, and P. J. Keller. Whole-brain functional imaging at cellular resolution using light-sheet microscopy. Nature Methods, 10(5):413–420, 2013.
[2] D. R. Brillinger, H. L. Bryant Jr, and J. P. Segundo. Identification of synaptic interactions. Biological Cybernetics, 22(4):213–228, 1976.
[3] F. Gerhard, T. Kispersky, G. J. Gutierrez, E. Marder, M. Kramer, and U. Eden. Successful reconstruction of a physiological circuit with known connectivity from spiking activity alone. PLoS Computational Biology, 9(7):e1003138, 2013.
[4] R. L. Goris, J. A. Movshon, and E. P. Simoncelli. Partitioning neuronal variability. Nature Neuroscience, 17(6):858–865, 2014.
[5] B. F. Grewe, D. Langer, H. Kasper, B. M. Kampa, and F. Helmchen. High-speed in vivo calcium imaging reveals neuronal network activity with near-millisecond precision. Nature Methods, 7(5):399–405, 2010.
[6] P. D. Hoff. Modeling homophily and stochastic equivalence in symmetric relational data. Advances in Neural Information Processing Systems 20, 20:1–8, 2008.
[7] J. H. Macke, L. Buesing, J. P. Cunningham, M. Y. Byron, K. V. Shenoy, and M. Sahani. Empirical models of spiking in neural populations. In Advances in Neural Information Processing Systems, pages 1350–1358, 2011.
[8] T. J. Mitchell and J. J. Beauchamp. Bayesian variable selection in linear regression. Journal of the American Statistical Association, 83(404):1023–1032, 1988.
[9] K. Nowicki and T. A. B. Snijders. Estimation and prediction for stochastic blockstructures. Journal of the American Statistical Association, 96(455):1077–1087, 2001.
[10] P. Orbanz and D. M. Roy. Bayesian models of graphs, arrays and other exchangeable random structures. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 37(2):437–461, 2015.
[11] L. Paninski. Maximum likelihood estimation of cascade point-process neural encoding models. Network: Computation in Neural Systems, 15(4):243–262, Jan. 2004.
[12] J. W. Pillow and J. Scott. Fully Bayesian inference for neural models with negative-binomial spiking. In F. Pereira, C. Burges, L. Bottou, and K. Weinberger, editors, Advances in Neural Information Processing Systems 25, pages 1898–1906. 2012.
[13] J. W. Pillow, J. Shlens, L. Paninski, A. Sher, A. M. Litke, E. Chichilnisky, and E. P. Simoncelli. Spatiotemporal correlations and visual signalling in a complete neuronal population. Nature, 454(7207):995–999, 2008.
[14] N. G. Polson, J. G. Scott, and J. Windle. Bayesian inference for logistic models using Pólya-gamma latent variables. Journal of the American Statistical Association, 108(504):1339–1349, 2013.
[15] R. Prevedel, Y.-G. Yoon, M. Hoffmann, N. Pak, G. Wetzstein, S. Kato, T. Schrödel, R. Raskar, M. Zimmer, E. S. Boyden, et al. Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy. Nature Methods, 11(7):727–730, 2014.
[16] J. R. Sanes and R. H. Masland. The types of retinal ganglion cells: current status and implications for neuronal classification. Annual Review of Neuroscience, 38:221–246, 2015.
[17] D. Soudry, S. Keshri, P. Stinson, M.-h. Oh, G. Iyengar, and L. Paninski. A shotgun sampling solution for the common input problem in neural connectivity inference. arXiv preprint arXiv:1309.3724, 2013.
[18] I. H. Stevenson, J. M. Rebesco, N. G. Hatsopoulos, Z. Haga, L. E. Miller, and K. P. Körding. Bayesian inference of functional connectivity and network structure from spikes. Neural Systems and Rehabilitation Engineering, IEEE Transactions on, 17(3):203–213, 2009.
[19] W. Truccolo, U. T. Eden, M. R. Fellows, J. P. Donoghue, and E. N. Brown. A point process framework for relating neural spiking activity to spiking history, neural ensemble, and extrinsic covariate effects. Journal of Neurophysiology, 93(2):1074–1089, 2005. doi: 10.1152/jn.00697.2004.
[20] M. Vidne, Y. Ahmadian, J. Shlens, J. W. Pillow, J. Kulkarni, A. M. Litke, E. Chichilnisky, E. Simoncelli, and L. Paninski. Modeling the impact of common noise inputs on the network activity of retinal ganglion cells. Journal of Computational Neuroscience, 33(1):97–121, 2012.
[21] J. Windle, N. G. Polson, and J. G. Scott. Sampling Pólya-gamma random variates: alternate and approximate techniques. arXiv preprint arXiv:1405.0506, 2014.
Unsupervised Learning of Spoken Language with
Visual Context
David Harwath, Antonio Torralba, and James R. Glass
Computer Science and Artificial Intelligence Laboratory
Massachusetts Institute of Technology
Cambridge, MA 02115
{dharwath, torralba, jrg}@csail.mit.edu
Abstract
Humans learn to speak before they can read or write, so why can?t computers
do the same? In this paper, we present a deep neural network model capable of
rudimentary spoken language acquisition using untranscribed audio training data,
whose only supervision comes in the form of contextually relevant visual images.
We describe the collection of our data comprised of over 120,000 spoken audio
captions for the Places image dataset and evaluate our model on an image search
and annotation task. We also provide some visualizations which suggest that our
model is learning to recognize meaningful words within the caption spectrograms.
1 Introduction
1.1 Problem Statement
Conventional automatic speech recognition (ASR) is performed by highly supervised systems which
utilize large amounts of training data and expert knowledge. These resources take the form of
audio with parallel transcriptions for training acoustic models, collections of text for training language models, and linguist-crafted lexicons mapping words to their pronunciations. The cost of
accumulating these resources is immense, so it is no surprise that very few of the more than 7,000
languages spoken across the world [1] support ASR (at the time of writing the Google Speech API
supports approximately 80). This highly supervised paradigm is not the only perspective on speech
processing. Glass [2] defines a spectrum of learning scenarios for speech and language systems.
Highly supervised approaches place less burden on the machine learning algorithms and more on
human annotators. With less annotation comes more learning difficulty, but more flexibility. At the
extreme end of the spectrum, a machine would need to make do with only sensory-level inputs, the
same way that humans do. It would need to infer the set of acoustic phonetic units, the subword
structures such as syllables and morphs, the lexical dictionaries of words and their pronunciations, as
well as higher level information about the syntactic and semantic elements of the language. While
such a machine does not exist today, there has been a significant amount of research in the speech
community in recent years working towards this goal. Sensor-based learning would allow a machine
to acquire language by observation and interaction, and could be applied to any language universally.
In this paper, we investigate novel neural network architectures for the purpose of learning high-level
semantic concepts across both audio and visual modalities. Contextually correlated streams of sensor
data from multiple modalities - in this case a visual image accompanied by a spoken audio caption
describing that image - are used to train networks capable of discovering patterns using otherwise
unlabelled training data. For example, these networks are able to pick out instances of the spoken
word "water" from within continuous speech signals and associate them with images containing
bodies of water. The networks learn these associations directly from the data, without the use of
conventional speech recognition, text transcriptions, or any expert linguistic knowledge whatsoever.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
This represents a new direction in training neural networks that is a step closer to human learning,
where the brain must utilize parallel sensory input to reason about its environment.
1.2 Previous Work
In recent years, there has been much work in the speech community towards developing completely
unsupervised techniques that can learn the elements of a language solely from untranscribed audio
data. A seminal paper in this sub-field [3] introduced Segmental Dynamic Time Warping (S-DTW),
which enabled the discovery of repetitions of the same word-like units in an untranscribed audio
stream. Many works followed this, some focusing on improving the computational complexity of the
algorithm [4; 5], and others on applications [6; 7; 8; 9]. Other approaches have focused on inferring
the lexicon of a language from strings of phonemes [10; 11], as well as inferring phone-like and
higher level units directly from the speech audio itself [12; 13].
Completely separately, multimodal modeling of images and text has been an extremely popular
pursuit in the machine learning field during the past decade, with many approaches focusing on
accurately annotating objects and regions within images. For example, Barnard et al. [14] relied
on pre-segmented and labelled images to estimate joint distributions over words and objects, while
Socher [15] learned a latent meaning space covering images and words learned on non-parallel data.
Other work has focused on natural language caption generation. While a large number of papers
have been published on this subject, recent efforts using recurrent deep neural networks [16; 17] have
generated much interest in the field. In [16], Karpathy uses a refined version of the alignment model
presented in [18] to produce training exemplars for a caption-generating RNN language model that
can be conditioned on visual features. Through the alignment process, a semantic embedding space
containing both images and words is learned. Other works have also attempted to learn multimodal
semantic embedding spaces, such as Frome et al. [19] who trained separate deep neural networks for
language modeling as well as visual object classification. They then embedded the object classes into
a dense word vector space with the neural network language model, and fine-tuned the visual object
network to predict the embedding vectors of the words corresponding to the object classes. Fang et al.
[20] also constructed a model which learned a DNN-based multimodal similarity function between
images and text for the purpose of generating captions.
The work most similar to ours is Harwath and Glass [21], in which the authors attempted to associate
individual words from read spoken captions to relevant regions in images. While the authors did not
employ ASR to first transcribe the speech, they did use the oracle word boundaries to segment the
audio caption and used a CNN to embed each word into a high dimensional space. This CNN was
pretrained to perform supervised isolated word classification on a separate dataset. Additionally, the
authors used an off-the-shelf RCNN object detection network [22] to segment the image into regions
as well as provide embedding vectors for each region. A neural network alignment model matched
the words to regions, and the resulting network was used to perform image search and annotation. In
this paper, we eschew the RCNN object detection, the oracle audio caption segmentation, and the
pretrained audio CNN. Instead, we present a network which is able to take as input the raw audio
signal of the spoken caption and compute a similarity score against the entire image frame. The
network discovers semantically meaningful words and phrases directly from the audio waveform,
and is able to reliably localize them within captions. We use our network to perform image search
and annotation on a new dataset of free form spoken audio captions for the Places205 dataset [23].
2 Data Collection
Recent work on natural language image caption generation [17; 18] have used training data comprised
of parallel images and human generated text captions. There are several widely used image captioning
datasets such as Flickr8k [24], Flickr30k [25], and MSCOCO [26], but the captions for these datasets
are in text form. Since we desire spontaneously spoken audio captions, we collected a new corpus of
captions for the Places205 dataset [23]. Places205 contains over 2.5 million images categorized into
205 different scene classes, providing a rich variety of object types in many different contexts.
To collect audio captions, we turned to Amazon?s Mechanical Turk, an online service which allows
requesters to post "Human Intelligence Tasks," or HITs, which anonymous workers can then complete
for a small monetary compensation. We use a modified version of the Spoke JavaScript framework
[27] as the basis of our audio collection HIT. Spoke is a flexible framework for creating speech-enabled
websites, acting as a wrapper around the HTML5 getUserMedia API while also supporting
streaming audio from the client to a backend server via the Socket.io library. The Spoke client-side
framework also includes an interface to Google?s SpeechRecognition service, which can be used
to provide near-instantaneous feedback to the Turker. In our Mechanical Turk collection interface,
four randomly selected images are shown to the user, and a start/stop record button is paired with
each image. The user is instructed to record a free-form spoken caption for each image, describing
the salient objects in the scene. The backend sends the audio off to the Google speech recognition
service, which returns a text hypothesis of the words spoken. Because we do not have a ground truth
transcription to check against, we use the number of recognized words as a means of quality control.
If the Google recognizer was able to recognize at least eight words, we accept the caption. If not,
the Turker is notified in real-time that their caption cannot be accepted, and is given the option to
re-record their caption. Each HIT cannot be submitted until all 4 captions have been successfully
recorded. We paid the Turkers $0.03 per caption, and have to date collected approximately 120,000
captions from 1,163 unique turkers, equally sampled across the 205 Places scene categories. We plan
to make our dataset publicly available in the near future.
For the experiments in this paper, we split a subset of our captions into a 114,000 utterance training
set, a 2,400 utterance development set, and a 2,400 utterance testing set, covering a 27,891 word
vocabulary (as specified by the Google ASR). The average caption duration was 9.5 seconds, and
contained an average of 21.9 words. All the sets were randomly sampled, so many of the same
speakers will appear in all three sets. We do not have ground truth text transcriptions for analysis
purposes, so we use the Google speech recognition hypotheses as a proxy. Given the difficult nature
of our data, these hypotheses are by no means error-free. To get an idea of the error rates offered by
the Google recognizer, we manually transcribed 100 randomly selected captions and found that the
Google transcriptions had an estimated word error rate of 23.17%, indicating that the transcriptions
are somewhat erroneous but generally reliable. To estimate the word start and end times for our
analysis figures, we used Kaldi [28] to train a speech recognizer using the standard Wall Street Journal
recipe, which was then used to force align the caption audio to the transcripts.
3 Multimodal Modeling of Images and Speech
3.1 Data Preprocessing
To preprocess our images we rely on the off-the-shelf VGG 16 layer network [29] pretrained on the
ImageNet ILSVRC 2014 task. The mean pixel value for the VGG network is first subtracted from
each image, and then we take the center 224 by 224 crop and feed it forward through the network.
We discard the classification layer and take the 4096-dimensional activations of the penultimate
layer to represent the input image features. We use a log mel-filterbank spectrogram to represent
the spoken audio caption associated with each image. Generating the spectrogram transforms the
1-dimensional waveform into a 2-dimensional signal with both frequency and time information. We
use a 25 millisecond window size and a 10 millisecond shift between consecutive frames, specifying
40 filters for the mel-scale filterbank. In order to take advantage of the additional computational
efficiency offered by performing gradient computation across batched input, we force every caption
spectrogram to have the same size. We do this by fixing the spectrogram size at L frames (1024 to
2048 in our experiments, respectively corresponding to approximately 10 and 20 seconds of audio).
We truncate any captions longer than L, and zero pad any shorter captions; approximately 66% of our
captions were found to be 10 seconds or shorter, while 97% were under 20 seconds. It is important
that the zero padding take place after any mean subtraction from the spectrograms, lest the padding
bias the mean.
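To make this concrete, here is a minimal sketch of the preprocessing using the librosa library; the 16 kHz sampling rate and the helper name are our assumptions, while the 25 ms window, 10 ms shift, 40-filter mel scale, and pad-after-mean-subtraction order follow the description above:

    import numpy as np
    import librosa

    def caption_spectrogram(wav_path, L=1024, sr=16000):
        y, _ = librosa.load(wav_path, sr=sr)
        # 25 ms window and 10 ms shift at 16 kHz -> 400- and 160-sample frames
        mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=400,
                                             hop_length=160, n_mels=40)
        logmel = np.log(mel + 1e-8)          # log compression
        logmel -= logmel.mean()              # mean-subtract BEFORE padding
        T = logmel.shape[1]
        if T >= L:                           # truncate captions longer than L frames
            return logmel[:, :L]
        out = np.zeros((40, L), dtype=logmel.dtype)
        out[:, :T] = logmel                  # zero-pad shorter captions
        return out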
3.2 Multimodal Network Description
In its simplest sense, our model is designed to calculate a similarity score for any given image and
caption pair, where the score should be high if the caption is relevant to the image and low otherwise.
It is similar in spirit to previously published models which attempt to learn a similarity measure
within one modality such as [30], but our model spans across multiple modalities. The specific
architecture we use is illustrated in Figure 1. Essentially, we use a two branched network, with one
branch devoted to modeling the image and the other devoted to modeling the spectrogram of the
audio caption. The final layer of each branch outputs a vector of activations, and the dot product of
these vectors is taken to represent the similarity between the image and the caption.
Figure 1: The architecture of our audio/visual neural network with the embedding dimension denoted
by d and the caption length by L. Separate branches of the network model the image and the audio
spectrogram, and are subsequently tied together at the top level with a dot product node which
calculates a similarity score for any given image and audio caption pair.
As described in
Section 3.1, the VGG 16 layer network effectively forms the bulk of the image branch, but we need
to have a means of mapping the 4096-dimensional VGG embeddings into the multimodal embedding
space that the images and audio will share. For this purpose we employ a simple linear transform.
The audio branch of our network is convolutional in nature and treats the spectrogram as a 1-channel
image. However, our spectrograms have a few interesting properties that differentiate them from
images. While it is easy to imagine how visual objects in images can be translated along both the
vertical and horizontal axes, the same is not quite true for words in spectrograms. A time delay
manifests itself as a translation in the temporal (horizontal) direction, but a fixed pitch will always
be mapped to the same frequency bin on the vertical axis. The same phone pronounced by two
different speakers will not necessarily contain energy at exactly the same frequencies, but the physics
is more complex than simply shifting the entire phone up and down the frequency axis. Following the
example of [21], we size the filters of the first layer of the network to capture the entire 40-dimensional
frequency axis over a context window of 5 frames, or approximately 50 milliseconds. This means
that the vertical dimension is effectively collapsed out in the first layer, and so subsequent layers are
only convolutional in the temporal dimension. After the final layer, we pool across the entire caption
in the temporal dimension (using either mean or max pooling), and then apply L2 normalization to
the caption embedding before computing the similarity score.
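A minimal PyTorch sketch of this two-branch design follows. The 40-by-5 first-layer filter, the linear image projection from 4096 dimensions to d, mean pooling over time, and L2 normalization of the caption embedding follow the text; the filter count (64), the single additional temporal convolution, and its kernel size are illustrative assumptions rather than the exact configuration:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MultimodalNet(nn.Module):
        def __init__(self, d=1024):
            super().__init__()
            self.image_proj = nn.Linear(4096, d)       # VGG features -> shared space
            # first audio layer spans the full 40-bin frequency axis over 5 frames
            self.conv1 = nn.Conv2d(1, 64, kernel_size=(40, 5))
            self.conv2 = nn.Conv1d(64, d, kernel_size=11, padding=5)

        def forward(self, vgg_feats, spec):
            # vgg_feats: (B, 4096); spec: (B, 1, 40, L)
            img = self.image_proj(vgg_feats)           # (B, d)
            h = F.relu(self.conv1(spec)).squeeze(2)    # frequency axis collapsed: (B, 64, L-4)
            h = F.relu(self.conv2(h))                  # temporal-only convolution: (B, d, L-4)
            cap = F.normalize(h.mean(dim=2), p=2, dim=1)  # mean-pool in time, L2-normalize
            return (img * cap).sum(dim=1)              # dot-product similarity per pair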
3.3 Training Procedure
By taking the dot product of an image embedding vector with an audio caption embedding vector, we
obtain a similarity score S. We want this score to be high for ground-truth pairs, and low otherwise.
We train with stochastic gradient descent using an objective function which compares the similarity
scores between matched image/caption pairs and mismatched pairs. Each minibatch consists of
B ground truth pairs, each of which is paired with one impostor image and one impostor caption
randomly sampled from the same minibatch. Let S_j^p denote the similarity score of the j-th
ground truth pair, S_j^c the score between the original image and the impostor caption, and S_j^i the
score between the original caption and the impostor image. The loss for the minibatch as a function
Figure 2: Examples of ground truth image/caption pairs along with the time-dependent similarity
profile showing which regions of the spectrogram the model believes are highly relevant to the
image. Overlaid on the similarity curve is the recognition text of the speech, along with vertical
lines to denote word boundaries. Note that the neural network model had no access to these (or any)
transcriptions during the training or testing phases.
of the network parameters θ is defined as:

L(θ) = Σ_{j=1}^{B} [ max(0, S_j^c − S_j^p + 1) + max(0, S_j^i − S_j^p + 1) ]    (1)
This loss function encourages the model to assign a higher similarity score to a ground truth
image/caption pair than a mismatched pair by a margin of 1. In [18] the authors used a similar
objective function to align images with text captions, but every single mismatched pair of images and
captions within a minibatch was considered. Here, we only sample two negative training examples
for each positive training example. In practice, we set our minibatch size to 128, used a constant
momentum of 0.9, and ran SGD training for 50 epochs. Learning rates took a bit of tuning to get
right. In the end, we settled on an initial value of 1e-5, and employed a schedule which decreased the
learning rate by a factor between 2 and 5 every 5 to 10 epochs.
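A sketch of the minibatch objective in Equation 1, assuming S is the B-by-B matrix of all image/caption similarity scores in the batch; picking impostors by a cyclic shift is our simplification of random within-batch sampling:

    import torch

    def ranking_loss(S):
        """S[i, j]: similarity of image i and caption j; the diagonal holds ground-truth pairs."""
        B = S.size(0)
        idx = torch.arange(B)
        imp = (idx + 1) % B                    # one impostor index per example (cyclic shift)
        Sp = S[idx, idx]                       # ground-truth pair scores
        Sc = S[idx, imp]                       # original image, impostor caption
        Si = S[imp, idx]                       # impostor image, original caption
        return (torch.clamp(Sc - Sp + 1, min=0) +
                torch.clamp(Si - Sp + 1, min=0)).sum()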
4 Experiments and Analysis
4.1 Image Query and Annotation
To objectively evaluate our models, we adopt an image search and annotation task similar to the one
used by [18; 21; 31]. We subsample a validation set of 1,000 image/caption pairs from the testing set
described in Section 2. To perform image search given a caption, we keep the caption fixed and use
our model to compute the similarity score between the caption and each of the 1,000 images in the
validation set. Image annotation works similarly, but instead the image is kept fixed and the network
is tasked with finding the caption which best fits the image. Some example search and annotation
results are displayed in Figures 3 and 4, and we report recall scores for the top 1, 5, and 10 hits in
Table 1. We experimented with many different variations on our model architecture, including varying
the number of hidden units, number of layers, filter sizes, embedding dimension, and embedding
normalization schemes. We found that an embedding dimension of d = 1024 worked well, and that
normalizing the caption embeddings prior to the similarity score computation helped. When only
the acoustic embedding vectors were L2 normalized, we saw a consistent increase in performance.
However, when the image embeddings were also L2 normalized (equivalent to replacing the dot
product similarity with a cosine similarity), the recall scores suffered. In Table 1, we show the impact
of various truncation lengths for the audio captions, as well as using a mean or max pooling scheme
across the audio caption.
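The retrieval evaluation above reduces to ranking similarity scores; a sketch (variable names ours) of computing R@k for image search over a held-out set of matched pairs:

    import numpy as np

    def recall_at_k(S, k):
        """S[i, j]: similarity of image i and caption j; pair (i, i) is the ground truth.
        Returns the fraction of captions whose true image ranks in the top k (image search)."""
        n = S.shape[0]
        hits = 0
        for j in range(n):                    # fix caption j, rank all n images
            top = np.argsort(-S[:, j])[:k]
            hits += int(j in top)
        return hits / n

    # annotation is the transpose task: recall_at_k(S.T, k) ranks captions for each image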
Figure 3: Example search results from our system. Shown on the top is the spectrogram of the query
caption, along with its speech recognition hypothesis text. Below each caption are its five highest
scoring images from the test set.
Figure 4: Example annotation results from our system. Shown on the left is the query image, and
on the right are the Google speech recognition hypotheses of the five highest scoring audio captions
from the test set. We do not show the spectrograms here to avoid clutter.
Table 1: Experimental results for image search and annotation on the Places audio caption data. All
models shown used an embedding dimension of 1024.

                                   Search                  Annotation
Pooling   Caption length (s)   R@1    R@5    R@10      R@1    R@5    R@10
Mean      10                   .056   .192   .289      .051   .194   .283
Mean      20                   .066   .215   .299      .082   .195   .295
Max       10                   .069   .192   .278      .068   .190   .274
Max       20                   .068   .223   .309      .061   .192   .291
We found that truncating the captions to 20 seconds instead of 10 only
slightly boosts the scores, and that mean and max pooling work about equally well. All models were
trained on an NVIDIA Titan X GPU, which usually took about 2 days.
4.2 Analysis of Image-Caption Pairs
In order to gain a better understanding of what kind of acoustic patterns are being learned by our
models, we computed time-dependent similarity profiles for each ground truth image/caption pair.
This was done by removing the final pooling layer from the spectrogram branch of a trained model,
leaving a temporal sequence of vectors reflecting the activations of the top-level convolutional units
with respect to time. We computed the dot product of the image embedding vector with each of these
vectors individually, rectified the signal to show only positive similarities, and then applied a 5th order
median smoothing filter. We time aligned the recognition hypothesis to the spectrogram, allowing
us to see exactly which words overlapped the audio regions that were highly similar to the image.
Figure 2 displays several examples of these similarity curves along with the overlaid recognition
text. In the majority of cases, the regions of the spectrogram which have the highest similarity to
the accompanying image turn out to be highly informative words or phrases, often making explicit
references to the salient objects in the image scenes. This suggests that our network is in fact learning
to recognize audio patterns consistent with words using zero linguistic supervision whatsoever and,
perhaps even more impressively, is able to learn their semantics.
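A sketch of this profile computation, assuming caption_vectors holds the pre-pooling sequence of top-level audio activations and img the image embedding (names ours):

    import numpy as np
    from scipy.signal import medfilt

    def similarity_profile(img, caption_vectors):
        """img: (d,); caption_vectors: (T, d) top-level conv activations over time."""
        sim = caption_vectors @ img          # dot product at every time step
        sim = np.maximum(sim, 0.0)           # rectify: keep only positive similarities
        return medfilt(sim, kernel_size=5)   # 5th-order median smoothing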
4.3 Analysis of Learned Acoustic Representations
To further examine the high-level acoustic representations learned by our networks, we extracted
spectrograms for 1645 instances of 14 different ground truth words from the development set by
force aligning the Google recognizer hypotheses to the audio. We did a forward pass of each of these
individual words through the audio branch of our network, leaving us with an embedding vector for
each spoken word instance. We performed t-SNE [32] analysis on these points, shown in Figure 5.
We observed that the points form pure clusters, indicating that the top-level activations of the audio
network carry information which is discriminative across different words.
We also examined the acoustic representations being learned at the bottom of the audio network
by using the first convolutional layer and its nonlinearity as a feature extractor for a query-by-example keyword spotting task on the TIMIT data [33]. We then concatenate delta and double
delta features (in the same style as the standard MFCC39 scheme), and finally apply PCA to
reduce the dimensionality of the resulting features to 60. Exemplars of 10 different keywords are
selected from the TIMIT training set, and frame-by-frame dynamic time warping using the cosine
distance measure is used to search for occurrences of those keywords in the TIMIT testing set.
Precision@N (P@N) and equal error rate (EER) are reported as evaluation metrics. We follow exactly
the same experimental setup, including the same keywords, described in detail by [9] and [12], and
compare against their published results in Table 2, along with a baseline system using standard
MFCC39 features. The features extracted by our network are competitive against the best previously
published baseline [12] in terms of P@N, while outperforming it on EER. Because [12] and [9] are
unsupervised approaches trained only on the TIMIT training set, this experiment is not a completely
fair comparison, but serves to demonstrate that discriminative phonetic information is indeed being
modelled by our networks, even though we do not use any conventional linguistic supervision.
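For reference, a sketch of frame-by-frame dynamic time warping under the cosine distance; this is a whole-sequence alignment for clarity, whereas true query-by-example search would additionally slide over, or use subsequence DTW within, the test utterance:

    import numpy as np

    def dtw_cosine(query, utt):
        """query: (Tq, d) keyword exemplar features; utt: (Tu, d) utterance features.
        Returns the length-normalized DTW alignment cost under the cosine distance."""
        q = query / np.linalg.norm(query, axis=1, keepdims=True)
        u = utt / np.linalg.norm(utt, axis=1, keepdims=True)
        dist = 1.0 - q @ u.T                          # (Tq, Tu) cosine distances
        Tq, Tu = dist.shape
        D = np.full((Tq + 1, Tu + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, Tq + 1):
            for j in range(1, Tu + 1):
                D[i, j] = dist[i - 1, j - 1] + min(D[i - 1, j],      # deletion
                                                   D[i, j - 1],      # insertion
                                                   D[i - 1, j - 1])  # match
        return D[Tq, Tu] / (Tq + Tu)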
Table 2: Precision @ N (P@N) and equal error rate (EER) results for the TIMIT keyword spotting
task. The 10 keywords used for the task were: development, organizations, money, age, artists,
surface, warm, year, problem, children.

System          P@N    EER
MFCC baseline   0.50   0.127
[9]             0.53   0.164
[12]            0.63   0.169
This work       0.62   0.049
Figure 5: t-SNE visualization in 2 dimensions for 1645 spoken instances of 14 different word types taken from the development data.
5 Conclusion
In this paper, we have presented a deep neural network architecture capable of learning associations
between natural image scenes and accompanying free-form spoken audio captions. The networks
do not rely on any form of conventional speech recognition, text transcriptions, or expert linguistic
knowledge, but are able to learn to recognize semantically meaningful words and phrases at the
spectral feature level. Aside from the pre-training of the off-the-shelf VGG network used in the image
branch of the network, contextual information derived from the images is the only form of supervision
used. We presented experimental results in which the networks were used to perform image search
and annotation tasks, as well as some preliminary analysis geared towards understanding the kinds of
acoustic representations being learned by the network.
There are many possible paths that future work might take. In the near term, the embeddings learned
by the networks could be used to perform acoustic segmentation and clustering, effectively learning a
lexicon of word-like units. The embeddings might be combined with other forms of word clustering
such as those based on dynamic time warping [3] to perform acoustically and semantically aware
word clustering. Further work should also more directly explore the regions of the images which
hold the highest affinity for different word or phrase-like units in the caption; this would enrich the
learned lexicon of units with visual semantics. One interesting long-term idea would be to collect
spoken captions for the same set of images across multiple languages and then train a network to
learn words across each of the languages. By identifying which words across the languages are highly
associated with the same kinds of visual objects, the network would have a means of performing
speech-to-speech translation. Yet another long-term idea would be to train networks capable of
synthesizing spoken captions for an arbitrary image, or alternatively synthesizing images given a
spoken description. Finally, it would be possible to apply our model to more generic forms of audio
with visual context, such as environmental sounds.
References
[1] M.P. Lewis, G.F. Simon, and C.D. Fennig, Ethnologue: Languages of the World, Nineteenth edition, SIL International. Online version: http://www.ethnologue.com, 2016.
[2] James Glass, "Towards unsupervised speech processing," in ISSPA Keynote, Montreal, 2012.
[3] A. Park and J. Glass, "Unsupervised pattern discovery in speech," in IEEE Transactions on Audio, Speech, and Language Processing, vol. 16, no. 1, pp. 186-197, 2008.
[4] A. Jansen, K. Church, and H. Hermansky, "Toward spoken term discovery at scale with zero resources," in Proceedings of Interspeech, 2010.
[5] A. Jansen and B. Van Durme, "Efficient spoken term discovery using randomized algorithms," in Proceedings of IEEE Workshop on Automatic Speech Recognition and Understanding, 2011.
[6] I. Malioutov, A. Park, R. Barzilay, and J. Glass, "Making sense of sound: Unsupervised topic segmentation over acoustic input," in Proceedings of the Association for Computational Linguistics (ACL), 2007.
[7] M. Dredze, A. Jansen, G. Coppersmith, and K. Church, "NLP on spoken documents without ASR," in Proceedings of EMNLP, 2010.
[8] D. Harwath, T.J. Hazen, and J. Glass, "Zero resource spoken audio corpus analysis," in Proceedings of ICASSP, 2012.
[9] Y. Zhang and J. Glass, "Unsupervised spoken keyword spotting via segmental DTW on Gaussian posteriorgrams," in Proceedings of ASRU, 2009.
[10] M. Johnson, "Unsupervised word segmentation for Sesotho using adaptor grammars," in Proceedings of ACL SIG on Computational Morphology and Phonology, 2008.
[11] S. Goldwater, T. Griffiths, and M. Johnson, "A Bayesian framework for word segmentation: exploring the effects of context," in Cognition, vol. 112, pp. 21-54, 2009.
[12] C. Lee and J. Glass, "A nonparametric Bayesian approach to acoustic model discovery," in Proceedings of the 2012 meeting of the Association for Computational Linguistics, 2012.
[13] C. Lee, T.J. O'Donnell, and J. Glass, "Unsupervised lexicon discovery from acoustic input," in Transactions of the Association for Computational Linguistics, 2015.
[14] K. Barnard, P. Duygulu, D. Forsyth, N. DeFreitas, D.M. Blei, and M.I. Jordan, "Matching words and pictures," in Journal of Machine Learning Research, 2003.
[15] R. Socher and F. Li, "Connecting modalities: Semi-supervised segmentation and annotation of images using unaligned text corpora," in Proceedings of CVPR, 2010.
[16] A. Karpathy and F. Li, "Deep visual-semantic alignments for generating image descriptions," in Proceedings of the 2015 Conference on Computer Vision and Pattern Recognition, 2015.
[17] O. Vinyals, A. Toshev, S. Bengio, and D. Erhan, "Show and tell: A neural image caption generator," in Proceedings of the 2015 Conference on Computer Vision and Pattern Recognition, 2015.
[18] A. Karpathy, A. Joulin, and F. Li, "Deep fragment embeddings for bidirectional image sentence mapping," in Proceedings of the Neural Information Processing Society, 2014.
[19] A. Frome, G. Corrado, J. Shlens, S. Bengio, J. Dean, M. Ranzato, and T. Mikolov, "DeViSE: A deep visual-semantic embedding model," in Proceedings of the Neural Information Processing Society, 2013.
[20] H. Fang, S. Gupta, F. Iandola, R. Srivastava, L. Deng, P. Dollar, J. Gao, X. He, M. Mitchell, J.C. Platt, C.L. Zitnick, and G. Zweig, "From captions to visual concepts and back," in Proceedings of CVPR, 2015.
[21] D. Harwath and J. Glass, "Deep multimodal semantic embeddings for speech and images," in Proceedings of the IEEE Workshop on Automatic Speech Recognition and Understanding, 2015.
[22] R. Girshick, J. Donahue, T. Darrell, and J. Malik, "Rich feature hierarchies for accurate object detection and semantic segmentation," in Proceedings of CVPR, 2013.
[23] B. Zhou, A. Lapedriza, J. Xiao, A. Torralba, and A. Oliva, "Learning deep features for scene recognition using places database," in Proceedings of the Neural Information Processing Society, 2014.
[24] C. Rashtchian, P. Young, M. Hodosh, and J. Hockenmaier, "Collecting image annotations using Amazon's Mechanical Turk," in Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon's Mechanical Turk, 2010.
[25] P. Young, A. Lai, M. Hodosh, and J. Hockenmaier, "From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions," in Transactions of the Association for Computational Linguistics, 2014.
[26] T. Lin, M. Maire, S. Belongie, L. Bourdev, R. Girshick, P. Perona, D. Ramanan, C.L. Zitnick, and P. Dollar, "Microsoft COCO: Common objects in context," in arXiv:1405.0312, 2015.
[27] P. Saylor, "Spoke: A framework for building speech-enabled websites," M.S. thesis, Massachusetts Institute of Technology, 32 Vassar Street, Cambridge, MA 02139, 2015. Available at https://github.com/psaylor/spoke.
[28] D. Povey, A. Ghoshal, G. Boulianne, L. Burget, O. Glembek, N. Goel, M. Hannemann, P. Motlicek, Y. Qian, P. Schwarz, J. Silovsky, G. Stemmer, and K. Vesely, "The Kaldi speech recognition toolkit," in IEEE 2011 Workshop on Automatic Speech Recognition and Understanding, 2011.
[29] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," CoRR, vol. abs/1409.1556, 2014.
[30] S. Chopra, R. Hadsell, and Y. LeCun, "Learning a similarity metric discriminatively, with application to face verification," in Proceedings of CVPR, 2005.
[31] R. Socher, A. Karpathy, Q.V. Le, C.D. Manning, and A.Y. Ng, "Grounded compositional semantics for finding and describing images with sentences," in Transactions of the Association for Computational Linguistics, 2014.
[32] L. van der Maaten and G. Hinton, "Visualizing high-dimensional data using t-SNE," in Journal of Machine Learning Research, 2008.
[33] J. Garofolo, L. Lamel, W. Fisher, J. Fiscus, D. Pallet, N. Dahlgren, and V. Zue, "The TIMIT acoustic-phonetic continuous speech corpus," 1993.
Feature-distributed sparse regression: a
screen-and-clean approach
Jiyan Yang? Michael W. Mahoney? Michael A. Saunders? Yuekai Sun?
? Stanford University ? University of California at Berkeley ? University of Michigan
[email protected] [email protected]
[email protected] [email protected]
Abstract
Most existing approaches to distributed sparse regression assume the data is partitioned by samples. However, for high-dimensional data (D ≫ N), it is more
natural to partition the data by features. We propose an algorithm for distributed
sparse regression when the data is partitioned by features rather than samples.
Our approach allows the user to tailor our general method to various distributed
computing platforms by trading off the total amount of data (in bits) sent over the
communication network and the number of rounds of communication. We show
that an implementation of our approach is capable of solving ℓ1-regularized ℓ2
regression problems with millions of features in minutes.
1 Introduction
Explosive growth in the size of modern datasets has fueled the recent interest in distributed statistical
learning. For examples, we refer to [2, 20, 9] and the references therein. The main computational
bottleneck in distributed statistical learning is usually the movement of data between compute nodes,
so the overarching goal of algorithm design is the minimization of such communication costs.
Most work on distributed statistical learning assume the data is partitioned by samples. However, for
high-dimensional datasets, it is more natural to partition the data by features. Unfortunately, methods
that are suited to such feature-distributed problems are scarce. A possible explanation for the paucity
of methods is that feature-distributed problems are harder than their sample-distributed counterparts. If
the data is distributed by samples, each machine has a complete view of the problem (albeit a partial
view of the dataset). Given only its local data, each machine can fit the full model. On the other hand,
if the data is distributed by features, each machine no longer has a complete view of the problem.
It can only fit a (generally mis-specified) submodel. Thus communication among the machines is
necessary to solve feature-distributed problems. In this paper, our goal is to develop algorithms that
minimize the amount of data (in bits) sent over the network across all rounds for feature-distributed
sparse linear regression.
The sparse linear model is
y = Xβ* + ε,    (1)

where X ∈ R^{N×D} are features, y ∈ R^N are responses, β* ∈ R^D are (unknown) regression
coefficients, and ε ∈ R^N are unobserved errors. The model is sparse because β* is s-sparse; i.e., the
cardinality of S := supp(β*) is at most s. Although it is an idealized model, the sparse linear model
has proven itself useful in a wide variety of applications.
A popular way to fit a sparse linear model is the lasso [15, 3]:
β̂ ∈ arg min_{‖β‖_1 ≤ 1} (1/2N) ‖y − Xβ‖_2²,
where we assumed the problem is scaled so that ‖β*‖_1 = 1. There is a well-developed theory of the
lasso that ensures the lasso estimator β̂ is nearly as close to β* as an oracle estimator X_S† y, where
S ⊆ [D] is the support of β* [11]. Formally, under some conditions on the Gram matrix (1/N) XᵀX,
the (in-sample) prediction error of the lasso is roughly (s log D)/N. Since the prediction error of the
oracle estimator is (roughly) s/N, the lasso estimator is almost as good as the oracle estimator. We
refer to [8] for the details.
We propose an approach to feature-distributed sparse regression that attains the convergence rate of
the lasso estimator. Our approach, which we call ScreenAndClean, consists of two stages: a
screening stage where we reduce the dimensionality of the problem by discarding irrelevant features;
and a cleaning stage where we fit a sparse linear model to a sketched problem. The key features of
the proposed approach are:
• We reduce the best-known communication cost (in bits) of feature-distributed sparse regression from O(mN²) to O(Nms) bits, where N is the sample size, m is the number of
machines, and s is the sparsity. To our knowledge, the proposed approach is the only one
that exploits sparsity to reduce communication cost.
• As a corollary, we show that constrained Newton-type methods converge linearly (up to a
statistical tolerance) on high-dimensional problems that are not strongly convex. Also, the
convergence rate is only weakly dependent on the condition number of the problem.
• Another benefit of our approach is that it allows users to trade off the amount of data (in bits)
sent over the network and the number of rounds of communication. At one extreme, it
is possible to reduce the amount of data sent over the network to Õ(mNs) bits (at the cost of
log( N / (s log D) ) rounds of communication). At the other extreme, it is possible to reduce the
total number of iterations to a constant at the cost of sending Õ(mN²) bits over the network.
Related work. DECO [17] is a recently proposed method that addresses the same problem we
address. At a high level, DECO is based on the observation that if the features on separate machines
are uncorrelated, the sparse regression problem decouples across machines. To ensure that the features
on separate machines are uncorrelated, DECO first decorrelates the features in a decorrelation step.
The method is communication efficient in that it only requires a single round of communication,
where O(mN²) bits of data are sent over the network. We refer to [17] for the details of DECO.
As we shall see, in the cleaning stage of our approach, we utilize sub-Gaussian sketches. In fact,
other sketches, e.g., sketches based on Hadamard transform [16] and sparse sketches [4] may also be
used. An overview of various sketching techniques can be found in [19].
The cleaning stage of our approach is operationally very similar to the iterative Hessian sketch
(IHS) by Pilanci and Wainwright for constrained least squares problems [12]. Similar Newton-type
methods that relied on sub-sampling rather than sketching were also studied by [14]. However, they
are chiefly concerned with the convergence of the iterates to the (stochastic) minimizer of the least
squares problem, while we are chiefly concerned with the convergence of the iterates to the unknown
regression coefficients ? ? . Further, their assumptions on the sketching matrix are stated in terms of
the transformed tangent cone at the minimizer of the least squares problem, while our assumptions
are stated in terms of the tangent cone at ? ? .
Finally, we wish to point out that our results are similar in spirit to those on the fast convergence
of first order methods [1, 10] on high-dimensional problems in the presence of restricted strong
convexity. However, those results are also chiefly concerned with the convergence of the iterates to
the (stochastic) minimizer of the least squares problem. Further, those results concern first-order,
rather than second-order methods.
2 A screen-and-clean approach
Our approach ScreenAndClean consists of two stages:
1. Screening Stage: reduce the dimension of the problem from D to d = O(N ) by discarding
irrelevant features.
2. Cleaning Stage: fit a sparse linear model to the O(N ) selected features.
We note that it is possible to avoid communication in the screening stage by using a method based on
the marginal correlations between the features and the response. Further, by exploiting sparsity, it is
possible to reduce the amount of communication to O(mNs) bits (ignoring polylogarithmic factors).
To the authors' knowledge, all existing one-shot approaches to feature-distributed sparse regression
that involve only a single round of communication require sending O(mN 2 ) bits over the network.
In the first stage of ScreenAndClean, the k-th machine selects a subset Ŝ_k of potentially relevant
features, where |Ŝ_k| = d_k ≪ N. To avoid discarding any relevant features, we use a screening
method that has the sure screening property:

P( supp(β*_k) ⊆ Ŝ_k for every k ∈ [m] ) → 1,    (2)

where β*_k is the k-th block of β*. We remark that we do not require the selection procedure to be
variable selection consistent. That is, we do not require the selection procedure to select only
relevant features. In fact, we permit the possibility that most of the selected features are irrelevant.
There are many existing methods that, under some conditions on the strength of the signal, have the
sure screening property. A prominent example is sure independence screening (SIS) [6]:

Ŝ_SIS ← { i ∈ [D] : (1/N) x_iᵀ y is among the ⌊γN⌋ largest entries of (1/N) Xᵀ y }.    (3)
SIS requires no communication among the machines, making it particularly amenable to distributed
implementation. Other methods include HOLP [18].
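A NumPy sketch of this screening rule (the function name is ours; we rank by the magnitude of the marginal correlations, the usual SIS convention):

    import numpy as np

    def sis_screen(X, y, gamma):
        """Return the indices of the floor(gamma * N) features most correlated with y."""
        N = X.shape[0]
        corr = np.abs(X.T @ y) / N           # marginal correlations (1/N) X^T y
        k = int(np.floor(gamma * N))
        return np.argsort(-corr)[:k]         # top-k features by |correlation|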
In the second stage of ScreenAndClean, which is presented as Algorithm 1, we solve the reduced
sparse regression problem in an iterative manner. At a high level, our approach is a constrained
quasi-Newton method. At the beginning of the second stage, each machine sketches the features that
are stored locally:

X̃_k ← (1/√(nT)) S X_{k,Ŝ_k},

where S ∈ R^{nT×N} is a sketching matrix and X_{k,Ŝ_k} ∈ R^{N×d_k} comprises the features stored on the
k-th machine that were selected by the screening stage. For notational convenience later, we divide
X̃_k row-wise into T blocks X̃_{k,1}, ..., X̃_{k,T}, where each block is an n × d_k block. We emphasize that the sketching matrix is identical on all the
machines. To ensure the sketching matrix is identical, it is necessary to synchronize the random
number generators on the machines.
We restrict our attention to sub-Gaussian sketches; i.e., the rows of S are i.i.d. sub-Gaussian random
vectors. Formally, a random vector x ∈ R^d is 1-sub-Gaussian if

P( θᵀx ≥ ε ) ≤ e^{−ε²/2} for any θ ∈ S^{d−1}, ε > 0.

Two examples of sub-Gaussian sketches are the standard Gaussian sketch, S_{i,j} ~ i.i.d. N(0, 1), and
the Rademacher sketch, where the S_{i,j} are i.i.d. Rademacher random variables.
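Both sketches are one-liners to generate; a NumPy sketch with a 1/√rows scaling folded in (our convention here) and a fixed seed, since seeding the generator identically on every machine keeps the sketching matrix in sync:

    import numpy as np

    def gaussian_sketch(rows, N, rng=None):
        rng = rng or np.random.default_rng(0)   # same seed on every machine
        return rng.standard_normal((rows, N)) / np.sqrt(rows)

    def rademacher_sketch(rows, N, rng=None):
        rng = rng or np.random.default_rng(0)
        return rng.choice([-1.0, 1.0], size=(rows, N)) / np.sqrt(rows)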
After each machine sketches the features that are stored locally, it sends the sketched features X̃_k
and the correlations of the screened features with the response, γ̂_k := (1/N) X_{k,Ŝ_k}ᵀ y, to a central machine,
which solves a sequence of T regularized quadratic programs (QPs) to estimate β*:

β̃_t ← arg min_{β ∈ B_1^d} (1/2) βᵀ Γ̃_t β − ( γ̂ − Γ̂ β̃_{t−1} + Γ̃_t β̃_{t−1} )ᵀ β,

where B_1^d is the unit ℓ1 ball, γ̂ = [γ̂_1ᵀ ... γ̂_mᵀ]ᵀ are the correlations of the screened features with the response,
Γ̂ = (1/N) X_Ŝᵀ X_Ŝ is the Gram matrix of the features selected by the screening stage, and

Γ̃_t := [X̃_{1,t} ... X̃_{m,t}]ᵀ [X̃_{1,t} ... X̃_{m,t}].

As we shall see, despite the absence of strong convexity, the sequence {β̃_t}_{t=1}^∞ converges q-linearly
to β* up to the statistical precision.
Algorithm 1 Cleaning Stage
Sketching
1: Each machine computes sketches (1/√(nT)) S_t X_{k,Ŝ_k} and sufficient statistics (1/N) X_{k,Ŝ_k}ᵀ y, t ∈ [T]
2: A central machine collects the sketches and sufficient statistics and forms
   Γ̃_t ← (1/nT) [S_t X_{1,Ŝ_1} ... S_t X_{m,Ŝ_m}]ᵀ [S_t X_{1,Ŝ_1} ... S_t X_{m,Ŝ_m}],
   γ̂ ← [ (1/N) X_{1,Ŝ_1}ᵀ y ; ... ; (1/N) X_{m,Ŝ_m}ᵀ y ]
Optimization
3: for t ∈ [T] do
4:   The cluster computes Γ̂ β̃_{t−1} in a distributed fashion:
     ŷ_{t−1} ← Σ_{k∈[m]} X_{k,Ŝ_k} β̃_{t−1,k},
     Γ̂ β̃_{t−1} ← [ (1/N) X_{1,Ŝ_1}ᵀ ŷ_{t−1} ; ... ; (1/N) X_{m,Ŝ_m}ᵀ ŷ_{t−1} ]
5:   β̃_t ← arg min_{β ∈ B_1^d} (1/2) βᵀ Γ̃_t β − ( γ̂ − Γ̂ β̃_{t−1} + Γ̃_t β̃_{t−1} )ᵀ β
6: end for
7: The central machine pads β̃_T with zeros to obtain an estimator of β*
The cleaning stage involves 2T + 1 rounds of communication: step 2 involves a single round of
communication, and step 4 involves two rounds of communication. We remark that T is a small
integer in practice. Consequently, the number of rounds of communication is a small integer.
In terms of the amount of data (in bits) sent over the network, the communication cost of the cleaning
stage grows as O(dnmT ), where d is the number of features selected by the screening stage and n is
the sketch size. The communication cost of step 2 is O(dmnT + d), while that of step 4 is O(d + N ).
Thus the dominant term is O(dnmT ) incurred by machines sending sketches to the central machine.
3 Theoretical properties of the screen-and-clean approach
In this section, we establish our main theoretical result regarding our ScreenAndClean
approach, given as Theorem 3.5. Recall that a key element of our approach is that the first stage
of ScreenAndClean establishes the sure screening property, i.e., (2). To this end, we begin by
stating a result by Fan and Lv that establishes sufficient conditions for SIS, i.e., (3), to possess the
sure screening property.
Theorem 3.1 (Fan and Lv (2008)). Let Σ be the covariance of the predictors and Z = XΣ^{−1/2} be
the whitened predictors. We assume Z satisfies the concentration property: there are c, c_1 > 1 and
C_1 > 0 such that

P( λ_max( d̃^{−1} Z̃ Z̃ᵀ ) > c_1 or λ_min( d̃^{−1} Z̃ Z̃ᵀ ) < c_1^{−1} ) ≤ e^{−C_1 n}

for any N × d̃ submatrix Z̃ of Z. Further,
1. the rows of Z are spherically symmetric, and ε_i ~ i.i.d. N(0, σ²) for some σ > 0;
2. var(y) ≲ 1 and min_{j∈S} β*_j ≥ c_2/N^κ and min_{j∈S} |cov(y, x_j)| ≥ c_3 β*_j for some κ > 0 and
c_2, c_3 > 0;
3. there is c_4 > 0 such that λ_max(Σ) ≤ c_4.
As long as κ < 1/2, there is some θ < 1 − 2κ such that if γ = cN^{−θ} for some c > 0, we have

P( S ⊆ Ŝ_SIS ) = 1 − C_2 exp( −C N^{1−2κ} / log N )

for some C, C_2 > 0, where Ŝ_SIS is given by (3).
The assumptions of Theorem 3.1 are discussed at length in [6], Section 5. We remark that the most
stringent assumption is the third assumption, which is an assumption on the signal-to-noise ratio
(SNR). It rules out the possibility that a relevant variable is (marginally) uncorrelated with the response.
We continue our analysis by studying the convergence rate of our approach. We begin by describing
three structural conditions we impose on the problem. In the rest of the section, let

K(S) := { β ∈ R^d : ‖β_{S^c}‖_1 ≤ ‖β_S‖_1 }.

Condition 3.2 (RE condition). There is μ_1 > 0 s.t. ‖β‖²_Γ̂ ≥ μ_1 ‖β‖_2² for any β ∈ K(S).
Condition 3.3. There is μ_2 > 0 s.t. ‖β‖²_{Γ̃_t} ≥ μ_2 ‖β‖²_Γ̂ for any β ∈ K(S).
Condition 3.4. There is μ_3 > 0 s.t.

|β_1ᵀ( Γ̃_t − Γ̂ )β_2| ≤ μ_3 ‖β_1‖_Γ̂ ‖β_2‖_Γ̂ for any β_1, β_2 ∈ K(S).
The preceding conditions deserve elaboration. The cone K(S) is an object that appears in the study
of the statistical properties of constrained M-estimators: it is the set to which the error of the constrained lasso
β̂ − β* belongs. Its image under X_Ŝ is the transformed tangent cone, which contains the prediction
error X_Ŝ(β̂ − β*). Condition 3.2 is a common assumption in the literature on high-dimensional
statistics. It is a specialization of the notion of restricted strong convexity that plays a crucial part in
the study of constrained M-estimators. Conditions 3.3 and 3.4 are conditions on the sketch. At a high
level, Conditions 3.3 and 3.4 state that the action of the sketched Gram matrix Γ̃_t on K(S) is similar
to that of Γ̂ on K(S). As we shall see, they are satisfied with high probability by sub-Gaussian
sketches. The following theorem is our main result regarding the ScreenAndClean method.
Theorem 3.5. Under Conditions 3.2, 3.3, and 3.4, for any T > 0 such that ‖β̃_t − β*‖_Γ̂ ≥ (λ_L/√s) ‖β̂ − β*‖_1 for all t ≤ T, we have

‖β̃_t − β*‖_Γ̂ ≤ ρ^{t−1} ‖β̃_1 − β*‖_Γ̂ + st(N, D) / (1 − ρ),

where ρ = c_ρ μ_3 / μ_2 is the contraction factor (c_ρ > 0 is an absolute constant) and

st(N, D) = ( 2(1 + 12μ_3) λ_max(Γ̂)^{1/2} / (μ_2 √s) ) ‖β̂ − β*‖_1 + ( 24 √s / (μ_2 μ_1) ) ‖γ̂ − Γ̂ β*‖_∞.
To interpret Theorem 3.5, recall

‖β̂ − β*‖_2 ≲_P √s ‖γ̂ − Γ̂β*‖_∞,   ‖β̂ − β*‖_1 ≲_P s ‖γ̂ − Γ̂β*‖_∞,

where β̂ is the lasso estimator. Further, the prediction error of the lasso estimator is (up to a constant)
(λ_L/√s) ‖β̂ − β*‖_1, which (up to a constant) is exactly the statistical precision st(N, D). Theorem 3.5 states
that the prediction error of β̃_t decreases q-linearly to that of the lasso estimator. We emphasize that
the convergence rate is linear despite the absence of strong convexity, which is usually the case
when N < D. A direct consequence is that only logarithmically many iterations ensure a desired
suboptimality, as stated in the following corollary.
Corollary 3.6. Under the conditions of Theorem 3.5,

T = ⌈ ( log( (1 − ρ) / st(N, D) ) − log(1/ε_1) ) / log(1/ρ) ⌉

iterations of the constrained quasi-Newton method, where ε_1 = ‖β̃_1 − β*‖_Γ̂, is enough to produce
an iterate whose prediction error is smaller than

ε > max{ ( λ_max(Γ̂)^{1/2} / √s ) ‖β̂ − β*‖_1, st(N, D) / (1 − ρ) } ≍ ‖β̂ − β*‖_Γ̂.

Theorem 3.5 is vacuous if the contraction factor ρ = c_ρ μ_3/μ_2 is not smaller than 1. To ensure ρ < 1, it
is enough to choose the sketch size n so that μ_3/μ_2 < 1/c_ρ. Consider the "good event"

E(δ) := { μ_2 ≥ 1 − δ, μ_3 ≤ 2δ }.    (4)

If the rows of S_t are sub-Gaussian, to ensure E(δ) occurs with high probability, Pilanci and Wainwright show it is enough to choose

n > (c_s/δ²) W( X_Ŝ( K(S) ∩ S^{d−1} ) )²,    (5)

where c_s > 0 is an absolute constant and W(S) is the Gaussian width of the set S ⊆ R^d [13].
Lemma 3.7 (Pilanci and Wainwright (2014)). For any sketching matrix whose rows are independent
1-sub-Gaussian vectors, as long as the sketch size n satisfies (5),

P( E(δ) ) ≥ 1 − c_5 exp( −c_6 n δ² ),

where c_5, c_6 are absolute constants.
As a result, when the sketch size n satisfies (5), Theorem 3.5 is non-trivial.
Trade-offs depending on sketch size. We remark that the contraction coefficient in Theorem 3.5
depends on the sketch size. As the sketch size n increases, the contraction coefficient decays, and
vice versa. Thus the sketch size allows practitioners to trade off the total number of rounds of communication
against the total amount of data (in bits) sent over the network. A larger sketch size results in fewer
rounds of communication, but more bits per round of communication, and vice versa. Recall [5] that the
communication cost of an algorithm is

rounds × overhead + bits × bandwidth^{−1}.

By tweaking the sketch size, users can trade off rounds and bits, thereby minimizing the communication cost of our approach on various distributed computing platforms. For example, the user of
a cluster comprising commodity machines is more concerned with overhead than the user of a
purpose-built high-performance cluster [7]. In the following, we study the two extremes of the
trade-off.
At one extreme, users are solely concerned with the total amount of data sent over the network. On
such platforms, users should use smaller sketches to reduce the total amount of data sent over the
network at the expense of performing a few extra iterations (rounds of communication).
Corollary 3.8. Under the conditions of Theorem 3.5 and Lemma 3.7, selecting d := ⌊γN⌋ features by
SIS, where γ = cN^{−θ} for some c > 0 and θ < 1 − 2κ, and letting

n > ( c_s (c_ρ + 2)² / 4 ) W( X_Ŝ( K(S) ∩ S^{d−1} ) )²,   T = ⌈ log( 1/st(N, D) ) / log 2 ⌉

in Algorithm 1 ensures ‖β̃_T − β*‖_Γ̂ ≤ 3 st(N, D) with probability at least

1 − c_4 T exp( −c_2 n δ² ) − C_2 exp( −C N^{1−2κ} / log N ),

where c, c_ρ, c_s, c_2, c_4, C, C_2 are absolute constants.
We state the corollary in terms of the statistical precision st(N, D) and the Gaussian width to keep
the expressions concise. It is known that the Gaussian width of the transformed tangent cone that
appears in Corollary 3.8 is O(s log d)^{1/2} [13]. Thus it is possible to keep the sketch size n on the
order of s log d. Recalling d = ⌊γN⌋, where γ is specified in the statement of Theorem 3.1, and
st(N, D) ≍ ( s log D / N )^{1/2}, we deduce that the communication cost of the approach is

O(dnmT) = O( N (s log d) m log( N / (s log D) ) ) = Õ(mNs),

where Õ ignores polylogarithmic terms. The takeaway is that it is possible to obtain an O(st(N, D))-accurate
solution by sending Õ(mNs) bits over the network. Compared to the O(mN²) communication
cost of DECO, we see that our approach exploits sparsity to reduce communication cost.
At the other extreme, there is a line of work in statistics that studies estimators whose evaluation only
requires a single round of communication. DECO is such a method. In our approach, it is possible to
obtain an st(N, D)-accurate solution in a single iteration by choosing the sketch size large enough to
ensure the contraction factor ρ is on the order of st(N, D).
Corollary 3.9. Under the conditions of Theorem 3.5 and Lemma 3.7, selecting d := ⌊γN⌋ features by
SIS, where γ = cN^{−θ} for some c > 0 and θ < 1 − 2κ, and letting

n > ( c_s ( c_ρ st(N, D)^{−1} + 2 )² / 4 ) W( X_Ŝ( K(S) ∩ S^{d−1} ) )²

and T = 1 in Algorithm 1 ensures ‖β̃_T − β*‖_Γ̂ ≤ 3 st(N, D) with probability at least

1 − c_4 T exp( −c_2 n δ² ) − C_2 exp( −C N^{1−2κ} / log N ),

where c, c_ρ, c_s, c_2, c_4, C, C_2 are absolute constants.
Figure 1: Plots of the statistical error log ‖X̃(β̂ − β*)‖_2² versus iteration, for sketch sizes
m = 231, 277, 369, 553, 922 and the lasso baseline. Panel (a): x_i ~ i.i.d. N(0, I_D); panel (b):
x_i ~ i.i.d. AR(1). Each plot shows the convergence of 10 runs of Algorithm 1 on the same problem
instance. We see that the statistical error decreases linearly up to the statistical precision of the problem.
Recalling

st(N, D)² ≍ s log D / N,   W( X_Ŝ( K(S) ∩ S^{d−1} ) )² ≍ s log d,

we deduce that the communication cost of the one-shot approach is

O(dnmT) = O( N² m log( N / (s log D) ) ) = Õ(mN²),

which matches the communication cost of DECO.
4 Simulation results
In this section, we provide empirical evaluations of our main algorithm ScreenAndClean on
synthetic datasets. In most of the experiments the performance of the methods is evaluated in terms
of the prediction error, which is defined as ‖X̃(β̂ − β*)‖_2². All the experiments are implemented
in Matlab on a shared-memory machine with 512 GB RAM and four 6-core Intel Xeon E7540 2
GHz processors. We use TFOCS as a solver for any optimization problem involved, e.g., step 5 in
Algorithm 1. For brevity, we refer to our approach as SC in the rest of the section.
4.1 Impact of number of iterations and sketch size
First, we confirm the prediction of Theorem 3.5 by simulation. Figure 1 shows the prediction error of
the iterates of Algorithm 1 with different sketch sizes m. We generate a random instance of a sparse
regression problem with size 1000 by 10000 and sparsity s = 10, and apply Algorithm 1 to estimate
the regression coefficients. Since Algorithm 1 is a randomized algorithm, for a given (fixed) dataset,
its error is reported as the median of the results from 11 independent trials. The two subfigures
show the results for two random designs: standard Gaussian (left) and AR(1) (right). Within each
subfigure, each curve corresponds to a sketch size, and the dashed black line show the prediction
error of the lasso estimator. On the logarithmic scale, a linearly convergent sequence of points appear
on a straight line. As predicted by Theorem 3.5, the iterates of Algorithm 1 converge linearly up to
the statistical precision, which is (roughly) the prediction error of the lasso estimator, and then it
plateaus. As expected, the higher the sketch size is, the fewer number of iteration is needed. These
results are consistent with our theoretical findings.
4.2 Impact of sample size N
Next, we evaluate the statistical performance of our SC algorithm when N grows. For completeness,
we also evaluate several competing methods, namely, lasso, SIS [6] and DECO [17]. The synthetic
datasets used in our experiments are based on model (1). In it, X ~ N(0, I_D) or X ~ N(0, Σ) with
all predictors equally correlated with correlation 0.7, and ε ~ N(0, 1). Similar to the setting that
appeared in [17], the support S of β* satisfies |S| = 5 and its coordinates are randomly chosen from
{1, ..., D}, and

β*_i = (−1)^{Ber(0.5)} ( |N(0, 1)| + 5 (log D / N)^{1/2} )  if i ∈ S,   β*_i = 0  if i ∉ S.
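A NumPy sketch of this data-generating process (the function name and the construction of the equicorrelated design via a shared common factor are ours):

    import numpy as np

    def make_data(N, D, s=5, rho=0.0, rng=None):
        rng = rng or np.random.default_rng()
        # equicorrelated design: Sigma = (1 - rho) I + rho * 11^T
        Z = rng.standard_normal((N, D))
        common = rng.standard_normal((N, 1))
        X = np.sqrt(1 - rho) * Z + np.sqrt(rho) * common
        beta = np.zeros(D)
        S = rng.choice(D, size=s, replace=False)
        signs = rng.choice([-1.0, 1.0], size=s)
        beta[S] = signs * (np.abs(rng.standard_normal(s)) + 5 * np.sqrt(np.log(D) / N))
        y = X @ beta + rng.standard_normal(N)
        return X, y, beta

    # e.g., make_data(200, 3000, rho=0.7) gives the correlated design of Figure 2(b)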
We generate datasets with fixed D = 3000 and N ranging from 50 to 600. For each N , 20 synthetic
datasets are generated and the plots are made by averaging the results.
In order to compare with methods such as DECO, which is concerned with the Lagrangian formulation
of the lasso, we modify our algorithm accordingly. That is, in step 5 of Algorithm 1, we solve

β̃_t ← arg min_{β ∈ R^d} (1/2) βᵀ Γ̃_t β − ( γ̂ − Γ̂ β̃_{t−1} + Γ̃_t β̃_{t−1} )ᵀ β + λ ‖β‖_1.

Herein, in our experiments, the regularization parameter is set to be λ = 2 ‖Xᵀε‖_∞. Also, for SIS
and SC, the screening size is set to be 2N . For SC, we run it with sketch size n = 2s log(N ) where
s = 5 and 3 iterations. For DECO, the dataset is partitioned into m = 3 subsets and it is implemented
without the refinement step. The results on two kinds of design matrix are presented in Figure 2.
Figure 2: Plots of the statistical error log ‖X̃(β̂ − β*)‖_2² versus log N for lasso, SIS, DECO, and
SC. Panel (a) is generated on datasets with independent predictors (x_i ~ i.i.d. N(0, I_D)) and panel
(b) on datasets with correlated predictors (x_i ~ i.i.d. N(0, Σ)). Here D = 3000. For each N, 20
independent simulated datasets are generated and the averaged results are plotted.
As can be seen, SIS achieves similar errors as lasso. Indeed, after careful inspection, we find that
in the cases where predictors are highly correlated, i.e., Figure 2(b), usually fewer than 2
non-zero coefficients can be recovered by sure independence screening. Nevertheless, this does not
degrade the accuracy much. Moreover, SC's performance is comparable to both SIS and lasso,
as the prediction error goes down at the same rate, and SC outperforms DECO in our experiments.
Finally, in order to demonstrate that our approach is amenable to distributed computing environments,
we implement it using Spark¹ on a modern cluster with 20 nodes, each of which has 12 executor cores.
We run our algorithm on an independent Gaussian problem instance with size 6000 and 200,000, and
sparsity s = 20. The screening size is 2400, the sketch size is 700, and the number of iterations is 3.
To show the scalability, we report the running time using 1, 2, 4, 8, 16 machines, respectively. As
most of the steps in our approach are embarrassingly parallel, the running time is almost halved each
time we double the number of machines.

[Figure 3: Running time (s) of a Spark implementation of SC versus the number of machines (1 to 16).]

5 Conclusion and discussion

We presented an approach to feature-distributed sparse regression that exploits the sparsity of the
regression coefficients to reduce communication cost. Our approach relies on sketching to compress
the information that has to be sent over the network. Empirical results verify our theoretical findings.
¹ http://spark.apache.org/
Acknowledgments. We would like to thank the Army Research Office and the Defense Advanced
Research Projects Agency for providing partial support for this work.
References
[1] Alekh Agarwal, Sahand Negahban, and Martin J. Wainwright. Fast global convergence of gradient methods for high-dimensional statistical recovery. The Annals of Statistics, 40(5):2452–2482, 2012.
[2] Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato, and Jonathan Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1–122, 2011.
[3] Scott Shaobing Chen, David L. Donoho, and Michael A. Saunders. Atomic decomposition by basis pursuit. SIAM Review, 43(1):129–159, 2001.
[4] K. L. Clarkson and D. P. Woodruff. Low rank approximation and regression in input sparsity time. In Symposium on Theory of Computing (STOC), 2013.
[5] Jim Demmel. Communication avoiding algorithms. In 2012 SC Companion: High Performance Computing, Networking Storage and Analysis, pages 1942–2000. IEEE, 2012.
[6] Jianqing Fan and Jinchi Lv. Sure independence screening for ultra-high dimensional feature space. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 70(5):849–911, 2008.
[7] Alex Gittens, Aditya Devarakonda, Evan Racah, Michael F. Ringenburg, Lisa Gerhardt, Jey Kottalam, Jialin Liu, Kristyn J. Maschhoff, Shane Canon, Jatin Chhugani, Pramod Sharma, Jiyan Yang, James Demmel, Jim Harrell, Venkat Krishnamurthy, Michael W. Mahoney, and Prabhat. Matrix factorization at scale: a comparison of scientific data analytics in Spark and C+MPI using three case studies. arXiv preprint arXiv:1607.01335, 2016.
[8] Trevor J. Hastie, Robert Tibshirani, and Martin J. Wainwright. Statistical Learning with Sparsity: The Lasso and Its Generalizations. CRC Press, 2015.
[9] Jason D. Lee, Yuekai Sun, Qiang Liu, and Jonathan E. Taylor. Communication-efficient sparse regression: a one-shot approach. arXiv preprint arXiv:1503.04337, 2015.
[10] Po-Ling Loh and Martin J. Wainwright. High-dimensional regression with noisy and missing data: Provable guarantees with nonconvexity. The Annals of Statistics, 40(3):1637–1664, 2012.
[11] Sahand N. Negahban, Pradeep Ravikumar, Martin J. Wainwright, and Bin Yu. A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. Statistical Science, 27(4):538–557, 2012.
[12] Mert Pilanci and Martin J. Wainwright. Iterative Hessian sketch: Fast and accurate solution approximation for constrained least-squares. arXiv preprint arXiv:1411.0347, 2014.
[13] Mert Pilanci and Martin J. Wainwright. Randomized sketches of convex programs with sharp guarantees. IEEE Transactions on Information Theory, 61(9):5096–5115, 2015.
[14] Farbod Roosta-Khorasani and Michael W. Mahoney. Sub-sampled Newton methods II: Local convergence rates. arXiv preprint arXiv:1601.04738, 2016.
[15] Robert Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society: Series B (Methodological), pages 267–288, 1996.
[16] Joel A. Tropp. Improved analysis of the subsampled randomized Hadamard transform. Advances in Adaptive Data Analysis, 3(1-2):115–126, 2011.
[17] Xiangyu Wang, David Dunson, and Chenlei Leng. Decorrelated feature space partitioning for distributed sparse regression. arXiv preprint arXiv:1602.02575, 2016.
[18] Xiangyu Wang and Chenlei Leng. High dimensional ordinary least squares projection for screening variables. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 2015.
[19] David P. Woodruff. Sketching as a tool for numerical linear algebra. arXiv preprint arXiv:1411.4357, 2014.
[20] Yuchen Zhang, John C. Duchi, and Martin J. Wainwright. Communication-efficient algorithms for statistical optimization. Journal of Machine Learning Research, 14:3321–3363, 2013.
Bayesian Optimization with a Finite Budget:
An Approximate Dynamic Programming Approach
Remi R. Lam
Massachusetts Institute of Technology
Cambridge, MA
[email protected]
Karen E. Willcox
Massachusetts Institute of Technology
Cambridge, MA
[email protected]
David H. Wolpert
Santa Fe Institute
Santa Fe, NM
[email protected]
Abstract
We consider the problem of optimizing an expensive objective function when a
finite budget of total evaluations is prescribed. In that context, the optimal solution
strategy for Bayesian optimization can be formulated as a dynamic programming instance. This results in a complex problem with uncountable, dimension-increasing
state space and an uncountable control space. We show how to approximate the
solution of this dynamic programming problem using rollout, and propose rollout
heuristics specifically designed for the Bayesian optimization setting. We present
numerical experiments showing that the resulting algorithm for optimization with
a finite budget outperforms several popular Bayesian optimization algorithms.
1 Introduction
Optimizing an objective function is a central component of many algorithms in machine learning
and engineering. It is also essential to many scientific models, concerning everything from human
behavior, to protein folding, to population biology. Often, the objective function to optimize is
non-convex and does not have a known closed-form expression. In addition, the evaluation of this
function can be expensive, involving a time-consuming computation (e.g., training a neural network,
numerically solving a set of partial differential equations, etc.) or a costly experiment (e.g., drilling a
borehole, administering a treatment, etc.). Accordingly, there is often a finite budget specifying the
maximum number of evaluations of the objective function allowed to perform the optimization.
Bayesian optimization (BO) has become a popular optimization technique for solving problems
governed by such expensive objective functions [17, 9, 2]. BO iteratively updates a statistical model
and uses it as a surrogate for the objective function. At each iteration, this statistical model is used to
select the next design to evaluate. Most BO algorithms are greedy, ignoring how the design selected
at a given iteration will affect the future steps of the optimization. Thus, the decisions made are
typically one-step optimal. Because of this shortsightedness, such algorithms balance, in a greedy
fashion, the BO exploration-exploitation trade-off: evaluating designs to improve the statistical model
or to find the optimizer of the objective function.
In contrast to greedy algorithms, a lookahead approach is aware of the remaining evaluations and can
balance the exploration-exploitation trade-off in a principled way. A lookahead approach builds an
optimal strategy that maximizes a long-term reward over several steps. That optimal strategy is the
solution of a challenging dynamic programming (DP) problem whose complexity stems, in part, from
the increasing dimensionality of the involved spaces as the budget increases, and from the presence
of nested maximizations and expectations. This is especially challenging when the design space takes
an uncountable set of values.
The first contribution of this paper is to use rollout [1], an approximate dynamic programming
(ADP) algorithm to circumvent the nested maximizations of the DP formulation. This leads to a
problem significantly simpler to solve. Rollout uses suboptimal heuristics to guide the simulation
of optimization scenarios over several steps. Those simulations allow us to quantify the long-term
benefits of evaluating a given design. The heuristics used by rollout are typically problem-dependent.
The second contribution of this paper is to build heuristics adapted to BO with a finite budget that
leverage existing greedy BO strategies. As demonstrated with numerical experiments, this can lead to
improvements in performance.
The following section of this paper provides a brief description of Gaussian processes and their use
in Bayesian optimization (Sec. 2), followed by a brief overview of dynamic programming (Sec. 3).
Sec. 4 develops the connection between BO and DP and discusses some of the related work. We then
propose to employ the rollout algorithm (with heuristics adapted to BO) to mitigate the complexity of
the DP algorithm (Sec. 5). In Sec. 6, we numerically investigate the proposed algorithm and present
our conclusions in Sec. 7.
2 Bayesian Optimization
We consider the following optimization problem:

    (OP)    x⋆ = argmin_{x∈X} f(x),    (1)

where x is a d-dimensional vector of design variables. The design space, X, is a bounded subset of
R^d, and f : X → R is an objective function that is expensive to evaluate. We are interested in finding
a minimizer x⋆ of the objective function using a finite budget of N function evaluations. We refer to
this problem as the original problem (OP).
In the Bayesian optimization (BO) setting, the (deterministic or noisy) objective function f is modeled
as a realization of a stochastic process, typically a Gaussian process (GP) G, on a probability space
(Ω, F, P), which defines a prior distribution over functions. A GP is fully defined by a mean function
m : X → R (often set to zero without loss of generality) and a covariance kernel κ : X² → R (see
[16] for an overview of GPs):

    f ∼ G(m, κ).    (2)
The BO algorithm starts with an initial design x_1 and its associated value y_1 = f(x_1) provided by
the user. This defines the first training set S_1 = {(x_1, y_1)}. At each iteration k ∈ {1, · · · , N}, the
GP prior is updated, using Bayes' rule, to obtain posterior distributions conditioned on the current
training set S_k = {(x_i, y_i)}_{i=1}^k containing the past evaluated designs and observations. For any
(potentially non-evaluated) design x ∈ X, the posterior mean μ_k(x) and posterior variance σ²_k(x) of
the GP, conditioned on S_k, are known in closed form and are considered cheap to evaluate:

    μ_k(x) = K(X_k, x)^⊤ [K(X_k, X_k) + λI]^{−1} Y_k,    (3)
    σ²_k(x) = κ(x, x) − K(X_k, x)^⊤ [K(X_k, X_k) + λI]^{−1} K(X_k, x),    (4)

where K(X_k, X_k) is the k × k matrix whose ij-th entry is κ(x_i, x_j), K(X_k, x) (respectively Y_k) is
the k × 1 vector whose i-th entry is κ(x_i, x) (respectively y_i), and λ is the noise variance. A new
design x_{k+1} is then selected and evaluated with the objective function to provide an observation
y_{k+1} = f(x_{k+1}). This new pair (x_{k+1}, y_{k+1}) is added to the current training set S_k to define the
training set for the next iteration, S_{k+1} = S_k ∪ {(x_{k+1}, y_{k+1})}.
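The posterior formulas (3)-(4) translate directly into NumPy; the sketch below uses the square-exponential kernel and the hyper-parameter values that appear in the experiments of Sec. 6 (σ² = 4, L = 0.1, λ = 10⁻³), which are otherwise arbitrary here.

```python
import numpy as np

def sq_exp_kernel(A, B, sigma2=4.0, ell=0.1):
    """Squared-exponential covariance kappa between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sigma2 * np.exp(-0.5 * d2 / ell ** 2)

def gp_posterior(Xk, Yk, Xq, noise=1e-3):
    """Posterior mean and variance of Eqs. (3)-(4) at query points Xq."""
    K = sq_exp_kernel(Xk, Xk) + noise * np.eye(len(Xk))
    Ks = sq_exp_kernel(Xk, Xq)                  # column i is K(X_k, x_i)
    mu = Ks.T @ np.linalg.solve(K, Yk)          # Eq. (3)
    V = np.linalg.solve(K, Ks)
    var = sq_exp_kernel(Xq, Xq).diagonal() - (Ks * V).sum(axis=0)   # Eq. (4)
    return mu, var
```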
In BO, the next design to evaluate is selected by solving an auxiliary problem (AP), typically of the
form:

    (AP)    x_{k+1} = argmax_{x∈X} U_k(x; S_k),    (5)
where Uk is a utility function to maximize. The rationale is that, because the optimization run-time or
cost is dominated by the evaluation of the expensive function f , time and effort should be dedicated
to choosing a good and informative (in a sense defined by the auxiliary problem) design to evaluate.
Solving this auxiliary problem (sometimes called maximization of an acquisition or utility function)
does not involve the evaluation of the expensive function f, but only the posterior quantities
of the GP and, thus, is considered cheap.
Examples of utility functions, U_k, used to select the next design to evaluate in Bayesian optimization
include maximizing the probability of improvement (PI) [12], maximizing the expected improvement
(EI) in the efficient global optimization (EGO) algorithm [10], minimizing a linear combination μ − βσ
of the posterior mean μ and standard deviation σ in GP upper confidence bound (GP-UCB) [18], or
maximizing a metric quantifying the information gain [19, 6, 7]. However, the aforementioned utility
functions are oblivious to the number of objective function evaluations left and, thus, lead to greedy
optimization strategies. Devising methods that account for the remaining budget would allow one to better
plan the sequence of designs to evaluate, balance in a principled way the exploration-exploitation
trade-off encountered in BO, and thus potentially lead to performance gains.
3 Dynamic Programming
In this section, we review some of the key features of dynamic programming (DP) which addresses
optimal decision making under uncertainty for dynamical systems. BO with a finite budget can be
seen as such a problem. It has the following characteristics: (1) a statistical model to represent the
objective function, (2) a system dynamic that describes how this statistical model is updated as new
information is collected, and (3) a goal that can be quantified with a long-term reward. DP provides
us with a mathematical formulation to address this class of problem. A full overview of DP can be
found in [1, 15].
We consider a system governed by a discrete-stage dynamic. At each stage k, the system is fully
characterized by a state z_k ∈ Z_k. A control u_k, from a control space U_k(z_k), that generally depends
on the state, is applied. Given a state z_k and a control u_k, a random disturbance w_k ∈ W_k(z_k, u_k)
occurs, characterized by a random variable W_k with probability distribution P(·|z_k, u_k). Then, the
system evolves to a new state z_{k+1} ∈ Z_{k+1}, according to the system dynamic. This can be written in
the following form:

    ∀k ∈ {1, · · · , N}, ∀(z_k, u_k, w_k) ∈ Z_k × U_k × W_k,    z_{k+1} = F_k(z_k, u_k, w_k),    (6)

where z_1 is an initial state, N is the total number of stages, or horizon, and F_k : Z_k × U_k × W_k → Z_{k+1}
is the dynamic of the system at stage k (where the spaces' dependencies are dropped for ease
of notation).
We seek the construction of an optimal policy (optimal in a sense yet to be defined). A policy,
π = {π_1, · · · , π_N}, is a sequence of rules, π_k : Z_k → U_k, for k = 1, · · · , N, mapping a state
z_k to a control u_k = π_k(z_k).
At each stage k, a stage-reward function r_k : Z_k × U_k × W_k → R quantifies the benefits of applying
a control u_k to a state z_k, subject to a disturbance w_k. A final reward function r_{N+1} : Z_{N+1} → R
similarly quantifies the benefits of ending at a state z_{N+1}. Thus, the expected reward starting from
state z_1 and using policy π is:

    J_π(z_1) = E[ r_{N+1}(z_{N+1}) + Σ_{k=1}^{N} r_k(z_k, π_k(z_k), w_k) ],    (7)

where the expectation is taken with respect to the disturbances. An optimal policy, π⋆, is a policy
that maximizes this (long-term) expected reward over the set of admissible policies Π:

    J⋆(z_1) = J_{π⋆}(z_1) = max_{π∈Π} J_π(z_1),    (8)

where J⋆ is the optimal reward function, also called the optimal value function. Using Bellman's
principle of optimality, the optimal reward is given by a nested formulation and can be computed
using the following DP recursive algorithm, working backward from k = N to k = 1:

    J_{N+1}(z_{N+1}) = r_{N+1}(z_{N+1}),    (9)
    J_k(z_k) = max_{u_k∈U_k} E[ r_k(z_k, u_k, w_k) + J_{k+1}(F_k(z_k, u_k, w_k)) ].    (10)

The optimal reward J⋆(z_1) is then given by J_1(z_1), and if u⋆_k = π⋆_k(z_k) maximizes the right-hand
side of Eq. 10 for all k and all z_k, then the policy π⋆ = {π⋆_1, · · · , π⋆_N} is optimal (e.g., [1], p. 23).
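To make the backward recursion concrete, here is a sketch of Eqs. (9)-(10) for a finite abstraction in which states, controls, and the disturbance expectation have already been tabulated; in the BO setting of the next section the state space is uncountable, so this only illustrates the structure, and the array layout is our own.

```python
import numpy as np

def dp_backward(stage_reward, final_reward, transition, N):
    """Backward DP of Eqs. (9)-(10). stage_reward[k][z, u] = E_w[r_k(z, u, w)];
    transition[k][z, u, z'] holds the induced transition probabilities; both
    are assumed to have marginalized the disturbance out already."""
    J = final_reward.copy()                       # J_{N+1} = r_{N+1}, Eq. (9)
    policy = [None] * N
    for k in reversed(range(N)):                  # backward from k = N to k = 1
        Q = stage_reward[k] + transition[k] @ J   # E[r_k + J_{k+1}] per (z, u)
        policy[k] = Q.argmax(axis=1)              # maximizing control, Eq. (10)
        J = Q.max(axis=1)
    return J, policy                              # J is the optimal value J_1
```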
4 Bayesian Optimization with a Finite Budget
In this section, we define the auxiliary problem of BO with a finite budget (Eq. 5) as a DP instance.
At each iteration k, we seek to evaluate the design that will lead, once the evaluation budget N
has been consumed, to the maximum reduction of the objective function. In general, the value of
the objective function f (x) at a design x is unknown before its evaluation and, thus, estimating
the long-term effect of an evaluation is not possible. However, using the GP representing f , it is
possible to characterize the unknown f (x) by a distribution. This can be used to simulate sequences
of designs and function values (i.e., optimization scenarios), compute their rewards and associated
probabilities, without evaluating f . Using this simulation machinery, it is possible to capture the
goal of achieving a long term reward in a utility function Uk . We now formulate the simulation of
optimization scenarios in the DP context and proceed with the definition of such utility function Uk .
We consider that the process of optimization is a dynamical system. At each iteration k, this system
is fully characterized by a state z_k equal to the training set S_k. The system is actioned by a control
u_k equal to the design x_{k+1} selected to be evaluated. For a given state and control, the value of the
objective function is unknown and modeled as a random variable W_k, characterized by:

    W_k ∼ N(μ_k(x_{k+1}), σ²_k(x_{k+1})),    (11)

where μ_k(x_{k+1}) and σ²_k(x_{k+1}) are the posterior mean and variance of the GP at x_{k+1}, conditioned
on S_k. We define a disturbance w_k to be equal to a realization f_{k+1} of W_k. Thus, w_k = f_{k+1}
represents a possible (simulated) value of the objective function at x_{k+1}. Note that this simulated
value of the objective function, f_{k+1}, is not the value of the objective function y_{k+1} = f(x_{k+1}).
Hence, we have the following identities: Z_k = (X × R)^k, U_k = X and W_k = R.
The new state z_{k+1} is then defined to be the augmented training set S_{k+1} = S_k ∪ {(x_{k+1}, f_{k+1})},
and the system dynamic can be written as:

    S_{k+1} = F_k(S_k, x_{k+1}, f_{k+1}) = S_k ∪ {(x_{k+1}, f_{k+1})}.    (12)

The disturbances w_{k+1} at iteration k + 1 are then characterized, using Bayes' rule, by the posterior
of the GP conditioned on the training set S_{k+1}.
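One simulated step of this system is cheap to write down; the sketch below represents the state as a list of (design, value) pairs, a convention of ours, and takes any routine returning the posterior of Eqs. (3)-(4) at a single point.

```python
import numpy as np

def simulate_transition(Sk, x_next, posterior, seed=None):
    """Simulate Eqs. (11)-(12): draw a fictitious value f_{k+1} from the GP
    posterior at x_{k+1}, then augment the training set."""
    rng = np.random.default_rng(seed)
    mu, var = posterior(Sk, x_next)
    f_next = rng.normal(mu, np.sqrt(var))   # w_k ~ N(mu_k, sigma_k^2), Eq. (11)
    return Sk + [(x_next, f_next)]          # S_{k+1} = S_k U {(x_{k+1}, f_{k+1})}
```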
To optimally control this system (i.e., to use an optimal strategy to solve OP), we define the stage-reward
function at iteration k to be the reduction in the objective function obtained at stage k:

    r_k(S_k, x_{k+1}, f_{k+1}) = max{ 0, f_min^{S_k} − f_{k+1} },    (13)

where f_min^{S_k} is the minimum value of the objective function in the training set S_k. We define the final
reward to be zero: r_{N+1}(S_{N+1}) = 0. The utility function, at a given iteration k characterized by S_k,
is defined to be the expected reward:

    ∀x_{k+1} ∈ X,    U_k(x_{k+1}; S_k) = E[ r_k(S_k, x_{k+1}, f_{k+1}) + J_{k+1}(F_k(S_k, x_{k+1}, f_{k+1})) ],    (14)

where the expectation is taken with respect to the disturbances, and J_{k+1} is defined by Eqs. 9-10.
Note that E[r_k(S_k, x_{k+1}, f_{k+1})] is simply the expected improvement given, for all x ∈ X, by:

    EI(x; S_k) = ( f_min^{S_k} − μ_k(x) ) Φ( (f_min^{S_k} − μ_k(x)) / σ_k(x) ) + σ_k(x) φ( (f_min^{S_k} − μ_k(x)) / σ_k(x) ),    (15)

where Φ is the standard Gaussian CDF and φ is the standard Gaussian PDF.
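Eq. (15) in code, given the posterior quantities at a point; the numerical floor on σ is a practical guard of ours, not part of the formula.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_min):
    """Closed-form EI of Eq. (15) from the posterior mean/std and incumbent."""
    sigma = np.maximum(sigma, 1e-12)
    z = (f_min - mu) / sigma
    return (f_min - mu) * norm.cdf(z) + sigma * norm.pdf(z)
```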
In other words, the GP is used to simulate possible scenarios, and the next design to evaluate is
chosen to maximize the decrease of the objective function, over the remaining iterations, averaged
over all possible simulated scenarios.
Several related methods have been proposed to go beyond greedy BO strategies. Optimal formulations for BO with a finite budget have been explored in [14, 4]. Both formulations involve nested
maximizations and expectations. Those authors note that their N-step lookahead methods scale
poorly with the number of steps considered (i.e., the budget N); they are able to solve the problem
for two-step lookahead. For some specific instances of BO (e.g., finding the super-level set of a
one-dimensional function), the optimal multi-step strategy can be computed efficiently [3]. Approximation techniques accounting for more steps have been recently proposed. They leverage partial
tree exploration [13] or Lipschitz reward function [11] and have been applied to cases where the
control spaces Uk are finite (e.g., at each iteration, uk is one of the 4 or 8 directions that a robot
can take to move before it evaluates f ). Theoretical performance guarantees are provided for the
algorithm proposed in [11]. Another approximation technique for non-greedy BO has been proposed
in GLASSES [5] and is applicable to uncountable control space Uk . It builds an approximation of
the N -steps lookahead formulation by using a one-step lookahead algorithm with approximation
of the value function Jk+1 . The approximate value function is induced by a heuristic oracle based
on a batch Bayesian optimization method. The oracle is used to select up to 15 steps at once to
approximate the value function.
In this paper, we propose to use rollout, an ADP algorithm, to address the intractability of the DP
formulation. The proposed approach is not restricted to countable control spaces, and accounts for
more than two steps. This is achieved by approximating the value function Jk+1 with simulations
over several steps, where the information acquired at each simulated step is explicitly used to simulate
the next step. Note that this is a closed-loop approach, in comparison to GLASSES [5] which is an
open-loop approach. In contrast to the DP formulation, the decision made at each simulated step of
the rollout is not optimal, but guided by problem-dependent heuristics. In this paper we propose the
use of heuristics adapted to BO, leveraging existing greedy BO strategies.
5 Rollout for Bayesian Optimization
Solving the auxiliary problem defined by Eqs. 5 and 14 is challenging. It requires the solution of nested
maximizations and expectations for which there is no closed-form expression known. In finite spaces,
the DP algorithm already suffers from the curse of dimensionality. In this particular setting, the state
spaces Z_k = (X × R)^k are uncountable and their dimension increases by d + 1 at each stage. The
control spaces U_k = X are also uncountable, but of fixed dimension. Thus, solving Eq. 5 with the utility
function defined by Eq. 14 is intractable.
To simplify the problem, we use ADP to approximate Uk with the rollout algorithm (see [1, 15] for
an overview). It is a one-step lookahead technique where Jk+1 is approximated using simulations
over several future steps. The difference with the DP formulation is that, in those simulated future
steps, rollout relaxes the requirement to optimally select a design (which is the origin of the nested
maximizations). Instead, rollout uses a suboptimal heuristic to decide which control to apply for a
given state. This suboptimal heuristic is problem-dependent and, in the context of BO with a finite
budget, we propose to use existing greedy BO algorithms as such a heuristic. Our algorithm proceeds
as follows.
For any iteration k, the optimal reward to go, J_{k+1} (Eq. 14), is approximated by H_{k+1}, the reward to
go induced by a heuristic π = (π_1, · · · , π_N), also called the base policy. H_{k+1} is recursively given by:

    H_N(S_N) = EI(π_N(S_N); S_N),    (16)
    H_n(S_n) = E[ r_n(S_n, π_n(S_n), f_{n+1}) + γ H_{n+1}(F(S_n, π_n(S_n), f_{n+1})) ],    (17)

for all n ∈ {k + 1, · · · , N − 1}, where γ ∈ [0, 1] is a discount factor incentivizing the early collection
of reward. A discount factor γ = 0 leads to a greedy strategy that maximizes the immediate collection
of reward; this corresponds to maximizing the EI. On the other hand, γ = 1 means that there is no
differentiation between collecting reward early or late in the optimization. Note that H_{k+1} is defined
by recursion and involves nested expectations. However, the nested maximizations are replaced by
the use of the base policy π. An important point is that, even if its definition is recursive, H_{k+1} can
be computed in a forward manner, unlike J_{k+1} which has to be computed in a backward fashion (see
Eqs. 9-10). The DP and the rollout formulations are illustrated in Fig. 1. The approximated reward
H_{k+1} is then numerically approximated by H̃_{k+1} using several simplifications. First, we use a rolling
horizon, h, to alleviate the curse of dimensionality. At a given iteration k, a rolling horizon limits the
number of stages considered to compute the approximate reward to go by replacing the horizon N by
Ñ = min{k + h, N}. Second, expectations are taken with respect to the (Gaussian) disturbances
and are approximated using Gauss-Hermite quadrature. We obtain the following formulation:

    H̃_Ñ(S_Ñ) = EI(π_Ñ(S_Ñ); S_Ñ),    (18)

    H̃_n(S_n) = Σ_{q=1}^{N_q} α^{(q)} [ r_n(S_n, π_n(S_n), f^{(q)}_{n+1}) + γ H̃_{n+1}( F(S_n, π_n(S_n), f^{(q)}_{n+1}) ) ],    (19)
Figure 1: Graphs representing the DP (left) and the rollout (right) formulations (in the binary
decisions, binary disturbances case). Each white circle represents a training set, each black circle
represents a training set and a design. Double arrows are decisions that depend on decisions lower in
the graph (leading to nested optimizations in the DP formulation), single arrows represent decisions
made using a heuristic (independent of the lower part of the graph). Dashed lines are simulated values
of the objective function and lead to the computation of expectations. Note the simpler structure of
the rollout graph compared to the DP one.
for all n ∈ {k + 1, · · · , Ñ − 1}, where N_q ∈ N is the number of quadrature weights α^{(q)} ∈ R
and points f^{(q)}_{k+1} ∈ R, and r_k is the stage reward defined by Eq. 13. Finally, for all iterations
k ∈ {1, · · · , N − 1} and for all x_{k+1} ∈ X, we define the utility function to be:

    U_k(x_{k+1}; S_k) = Σ_{q=1}^{N_q} α^{(q)} [ r_k(S_k, x_{k+1}, f^{(q)}_{k+1}) + γ H̃_{k+1}( F(S_k, x_{k+1}, f^{(q)}_{k+1}) ) ].    (20)

We note that for the last iteration, k = N, the utility function is known in closed form:

    U_N(x_{N+1}; S_N) = EI(x_{N+1}; S_N).    (21)
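Putting Eqs. (18)-(20) together, the recursive sketch below estimates the rollout utility of a candidate design with Gauss-Hermite quadrature over the Gaussian disturbances. Here base_policy and posterior are assumed callables standing in for the heuristic defined next and for the GP posterior of Sec. 2, and expected_improvement is the helper sketched after Eq. (15); the recursion makes the O(N_q^h) cost discussed below explicit.

```python
import numpy as np

def rollout_utility(Sk, x_next, k, N_tilde, base_policy, posterior,
                    expected_improvement, gamma=1.0, n_quad=5):
    """Estimate U_k(x_{k+1}; S_k) of Eq. (20) by simulating scenarios."""
    nodes, weights = np.polynomial.hermite_e.hermegauss(n_quad)
    weights = weights / weights.sum()        # probabilists' weights, sum to 1

    def reward_to_go(Sn, x, n):
        """Quadrature version of Eq. (19), with the stage-n design fixed to x."""
        mu, var = posterior(Sn, x)
        fmin = min(f for _, f in Sn)
        total = 0.0
        for a, fq in zip(weights, mu + np.sqrt(var) * nodes):
            r = max(0.0, fmin - fq)          # stage reward, Eq. (13)
            S_next = Sn + [(x, fq)]          # simulated transition, Eq. (12)
            if n + 1 == N_tilde:             # terminal stage, Eq. (18)
                x_T = base_policy(S_next, n + 1)
                mu_T, var_T = posterior(S_next, x_T)
                f_T = min(f for _, f in S_next)
                h = expected_improvement(mu_T, np.sqrt(var_T), f_T)
            else:                            # follow the base policy, Eq. (19)
                h = reward_to_go(S_next, base_policy(S_next, n + 1), n + 1)
            total += a * (r + gamma * h)
        return total

    return reward_to_go(Sk, x_next, k)
```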
The base policy π used as a heuristic in the rollout is problem-dependent. A good heuristic π should
be cheap to compute and induce an expected reward J_π close to the optimal expected reward J_{π⋆}
(Eq. 7). In the context of BO with a finite budget, this heuristic should mimic an optimal strategy that
balances the exploration-exploitation trade-off. We propose to use existing BO strategies, in particular,
maximization of the expected improvement (which has an exploratory behavior) and minimization of
the posterior mean (which has an exploitative behavior) to build the base policy. For every iteration
k ∈ {1, · · · , N − 1}, we define π = {π_{k+1}, · · · , π_Ñ} such that, at stage n ∈ {k + 1, · · · , Ñ − 1}, the
policy component, π_n : Z_n → X, maps a state z_n = S_n to the design x_{n+1} that maximizes the
expected improvement (Eq. 15):

    x_{n+1} = argmax_{x∈X} EI(x; S_n).    (22)

The last policy component, π_Ñ : Z_Ñ → X, is defined to map a state z_Ñ = S_Ñ to the design x_{Ñ+1}
that minimizes the posterior mean (Eq. 3):

    x_{Ñ+1} = argmin_{x∈X} μ_Ñ(x).    (23)
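A matching sketch of this base policy, with a finite candidate grid standing in for the continuous inner optimization (the inner optimizer is not specified in this excerpt), and assuming the posterior routine can evaluate a batch of candidates:

```python
import numpy as np

def make_base_policy(candidates, posterior, N_tilde, expected_improvement):
    """Heuristic of Eqs. (22)-(23): EI maximization at intermediate stages,
    posterior-mean minimization at the final stage N_tilde."""
    def base_policy(Sn, n):
        mu, var = posterior(Sn, candidates)      # vectorized over candidates
        if n == N_tilde:
            return candidates[np.argmin(mu)]     # Eq. (23)
        fmin = min(f for _, f in Sn)
        ei = expected_improvement(mu, np.sqrt(var), fmin)
        return candidates[np.argmax(ei)]         # Eq. (22)
    return base_policy
```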
Each evaluation of the utility function requires O(N_q^h) applications of the heuristic. In our approach,
the heuristic involves optimizing a quantity that requires O(|S_k|²) work (rank-1 update of the
Cholesky decomposition to update the GP, and back-substitution for the posterior variance).
To summarize, we propose to use rollout, a one-step lookahead algorithm that approximates Jk+1 .
This approximation is computed using simulation over several steps (e.g., more than 3 steps), where
the information collected after a simulated step is explicitly used to simulate the next step (i.e., it is a
closed-loop approach). This is achieved using a heuristic instead of the optimal strategy, and thus,
leads to a formulation without nested maximizations.
6 Experiments and Discussion
In this section, we apply the proposed algorithm to several optimization problems with a finite budget
and demonstrate its performance on GP-generated and classic test functions.
We use a zero-mean GP with a square-exponential kernel (hyper-parameters: maximum variance
σ² = 4, length scale L = 0.1, noise variance λ = 10⁻³) to generate 24 objective functions defined
on X = [0, 1]². We generate 10 designs from a uniform distribution on X, and use them as 10
different initial guesses for optimization. Thus, for each optimization, the initial training set S_1
contains one training point. All algorithms are given a budget of N = 15 evaluations. For each
initial guess and each objective function, we run the BO algorithm with the following utility
functions: PI, EI and GP-UCB (with the parameter balancing exploration and exploitation set to
β = 3). We also run the rollout algorithm proposed in Sec. 5 and defined by Eqs. 5 and 20, for the same
objective functions and with the same initial guesses, for different parameters of the rolling horizon
h ∈ {2, 3, 4, 5} and discount factor γ ∈ {0.5, 0.7, 0.9, 1.0}.
Given a limited evaluation budget, we evaluate the performance of an algorithm for the original
problem (Eq. 1) in terms of the gap G [8]. The gap measures the best decrease in objective function from
the first to the last iteration, normalized by the maximum reduction possible:

    G = ( f_min^{S_1} − f_min^{S_{N+1}} ) / ( f_min^{S_1} − f(x⋆) ).    (24)
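Given the history of observed objective values for one run, the gap is a one-liner; the argument names are ours.

```python
def gap(f_history, f_star):
    """Gap G of Eq. (24). f_history[0] is the single value in S_1, so it is
    also f_min over S_1; min(f_history) is f_min over S_{N+1}."""
    return (f_history[0] - min(f_history)) / (f_history[0] - f_star)
```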
The mean and the median performances of the rollout algorithm are computed for the 240 experiments
for the 16 configurations of discount factors and rolling horizons. The results are reported in Table 1.
Table 1: Mean (left) and median (right) performance G over 24 objective functions and 10 initial
guesses for different rolling horizons h and discount factors γ.

    Mean G                                      Median G
    γ      h=2     h=3     h=4     h=5          γ      h=2     h=3     h=4     h=5
    0.5    0.790   0.811   0.799   0.817        0.5    0.849   0.862   0.858   0.856
    0.7    0.787   0.786   0.787   0.836        0.7    0.849   0.830   0.806   0.878
    0.9    0.816   0.767   0.827   0.828        0.9    0.896   0.839   0.876   0.850
    1.0    0.818   0.793   0.842   0.812        1.0    0.870   0.861   0.917   0.858
The mean gap achieved is G = 0.698 for PI, G = 0.762 for EI and G = 0.711 for GP-UCB.
All the configurations of the rollout algorithm outperform the three greedy BO algorithms. The
best performance is achieved by the configuration γ = 1.0 and h = 4. For this configuration, the
performance increase with respect to EI is about 8%. The worst mean configuration (γ = 0.9 and
h = 3) still outperforms EI by 0.5%.
The median performance achieved is G = 0.738 for PI, G = 0.777 for EI and G = 0.770 for
GP-UCB. All the configurations of the rollout algorithm outperform the three greedy BO algorithms.
The best performance is achieved by the configuration γ = 1.0 and h = 4 (same as the best mean
performance). For this configuration, the performance increase with respect to EI is about 14%.
The worst rollout configuration (γ = 0.7 and h = 4) still outperforms EI by 2.9%. The complete
distribution of gaps achieved by the greedy BO algorithms and the best and worst configurations of
the rollout is shown in Fig. 2.
We notice that increasing the length of the rolling horizon does not necessarily increase the gap (see
Table 1). This is a classic result from DP (Sec. 6.5.1 of [1]). We also notice that discounting the
future rewards has no clear effect on the gap. For all discount factors tested, we notice that reward is
not only collected at the last stage (see Fig. 2). This is a desirable property. Indeed, in a case where
the optimization has to be stopped before the end of the budget is reached, one would wish to have
collected part of the reward.
We now evaluate the performance on test functions.¹ We consider four rollout configurations: R-4-9
(h = 4, γ = 0.9), R-4-10 (h = 4, γ = 1.0), R-5-9 (h = 5, γ = 0.9) and R-5-10 (h = 5, γ = 1.0),

¹ Test functions from http://www.sfu.ca/~ssurjano/optimization.html.
Figure 2: Left: Histogram of gap for the rollout (best and worst mean configurations tested) and
greedy BO algorithms. Right: Median gap of the rollout (for the best and worst mean configurations
tested) and other algorithms as a function of iteration (budget of N = 15).
and two additional BO algorithms: PES [7] and the non-greedy GLASSES [5]. We use a square-exponential
kernel for each algorithm (hyper-parameters: maximum variance σ² = 4, noise variance
λ = 10⁻³, length scale L set to 10% of the design space length scale). We generate 40 designs from
a uniform distribution on X, and use them as 40 different initial guesses for optimization. Each
algorithm is given N = 15 evaluations. The mean and median gap (over the 40 initial guesses) for
each function define 8 metrics (shown in Table 2). We found that rollout had the best metric 3 times
out of 8, and was never the worst algorithm. PES was found to perform best on 3 metrics out of
8 but was the worst algorithm for 2 metrics out of 8. GLASSES was never the best algorithm and
performed the worst in one metric. Note that the rollout configuration R-4-9 outperforms GLASSES
on 5 metrics out of 6 (excluding the case of the Griewank function). Thus, our rollout algorithm
performs well and shows robustness.
Table 2: Mean and median gap G over 40 initial guesses.

    Function name             PI     EI     UCB    PES    GLASSES  R-4-9  R-4-10  R-5-9  R-5-10
    Branin-Hoo       Mean     0.847  0.818  0.848  0.861  0.846    0.904  0.898   0.887  0.903
                     Median   0.922  0.909  0.910  0.983  0.909    0.959  0.943   0.921  0.950
    Goldstein-Price  Mean     0.873  0.866  0.733  0.819  0.782    0.895  0.784   0.861  0.743
                     Median   0.983  0.981  0.899  0.987  0.919    0.991  0.985   0.989  0.928
    Griewank         Mean     0.827  0.884  0.913  0.972  1²       0.882  0.885   0.930  0.867
                     Median   0.904  0.953  0.970  0.987  1²       0.967  0.962   0.960  0.954
    Six-hump Camel   Mean     0.850  0.887  0.817  0.664  0.776    0.860  0.825   0.793  0.803
                     Median   0.893  0.970  0.915  0.801  0.941    0.926  0.900   0.941  0.907
7 Conclusions
We presented a novel algorithm to perform Bayesian optimization with a finite budget of evaluations.
The next design to evaluate is chosen to maximize a utility function that quantifies long-term rewards.
We propose to employ an approximate dynamic programming algorithm, rollout, to approximate
this utility function. Rollout leverages heuristics to circumvent the need for nested maximizations.
We propose to build such a heuristic using existing suboptimal Bayesian optimization strategies, in
particular maximization of the expected improvement and minimization of the posterior mean. The
proposed approximate dynamic programming algorithm is empirically shown to outperform popular
greedy and non-greedy Bayesian optimization algorithms on multiple test cases.
This work was supported in part by the AFOSR MURI on multi-information sources of multi-physics
systems under Award Number FA9550-15-1-0038, program manager Dr. Jean-Luc Cambier.
² This gap G = 1 results from an arbitrary choice made by one optimizer used by GLASSES to evaluate the
origin. The origin happens to be the minimizer of the Griewank function. We thus exclude those results from the
analysis.
References
[1] D. P. Bertsekas. Dynamic Programming and Optimal Control, volume 1. Athena Scientific, 1995.
[2] E. Brochu, V. M. Cora, and N. de Freitas. A tutorial on Bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning. arXiv preprint arXiv:1012.2599, 2010.
[3] J. M. Cashore, L. Kumarga, and P. I. Frazier. Multi-step Bayesian optimization for one-dimensional feasibility determination. Working paper. Retrieved from https://people.orie.cornell.edu/pfrazier/pub/workingpaper-CashoreKumargaFrazier.pdf.
[4] D. Ginsbourger and R. Le Riche. Towards Gaussian process-based optimization with finite time horizon. In mODa 9 - Advances in Model-Oriented Design and Analysis, pages 89–96. Springer, 2010.
[5] J. González, M. Osborne, and N. D. Lawrence. GLASSES: Relieving the myopia of Bayesian optimisation. In Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, pages 790–799, 2016.
[6] P. Hennig and C. J. Schuler. Entropy search for information-efficient global optimization. The Journal of Machine Learning Research, 13(1):1809–1837, 2012.
[7] J. M. Hernández-Lobato, M. W. Hoffman, and Z. Ghahramani. Predictive entropy search for efficient global optimization of black-box functions. In Advances in Neural Information Processing Systems, pages 918–926, 2014.
[8] D. Huang, T. T. Allen, W. I. Notz, and N. Zeng. Global optimization of stochastic black-box systems via sequential kriging meta-models. Journal of Global Optimization, 34(3):441–466, 2006.
[9] D. R. Jones. A taxonomy of global optimization methods based on response surfaces. Journal of Global Optimization, 21(4):345–383, 2001.
[10] D. R. Jones, M. Schonlau, and W. J. Welch. Efficient global optimization of expensive black-box functions. Journal of Global Optimization, 13(4):455–492, 1998.
[11] C. K. Ling, K. H. Low, and P. Jaillet. Gaussian process planning with Lipschitz continuous reward functions: Towards unifying Bayesian optimization, active learning, and beyond. In 30th AAAI Conference on Artificial Intelligence, 2016.
[12] D. J. Lizotte. Practical Bayesian Optimization. PhD thesis, Edmonton, Alta., Canada, 2008. AAINR46365.
[13] R. Marchant, F. Ramos, and S. Sanner. Sequential Bayesian optimisation for spatial-temporal monitoring. 2015.
[14] M. A. Osborne, R. Garnett, and S. J. Roberts. Gaussian processes for global optimization. In 3rd International Conference on Learning and Intelligent Optimization (LION3), pages 1–15, 2009.
[15] W. B. Powell. Approximate Dynamic Programming: Solving the Curses of Dimensionality, volume 842. John Wiley & Sons, 2011.
[16] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, Cambridge, MA, 2006.
[17] J. Snoek, H. Larochelle, and R. P. Adams. Practical Bayesian optimization of machine learning algorithms. In Advances in Neural Information Processing Systems, pages 2951–2959, 2012.
[18] N. Srinivas, A. Krause, S. M. Kakade, and M. Seeger. Gaussian process optimization in the bandit setting: No regret and experimental design. In Proceedings of the 27th International Conference on Machine Learning, pages 1015–1022, 2010.
[19] J. Villemonteix, E. Vazquez, and E. Walter. An informational approach to the global optimization of expensive-to-evaluate functions. Journal of Global Optimization, 44(4):509–534, 2009.
Kernel Observers: Systems-Theoretic Modeling and
Inference of Spatiotemporally Evolving Processes
Hassan A. Kingravi
Pindrop
Atlanta, GA 30308
[email protected]
Harshal Maske and Girish Chowdhary
University of Illinois at Urbana Champaign
Urbana, IL 61801
[email protected], [email protected]
Abstract
We consider the problem of estimating the latent state of a spatiotemporally evolving continuous function using very few sensor measurements. We show that
layering a dynamical systems prior over temporal evolution of weights of a kernel
model is a valid approach to spatiotemporal modeling, and that it does not require
the design of complex nonstationary kernels. Furthermore, we show that such a
differentially constrained predictive model can be utilized to determine sensing
locations that guarantee that the hidden state of the phenomena can be recovered
with very few measurements. We provide sufficient conditions on the number and
spatial location of samples required to guarantee state recovery, and provide a lower
bound on the minimum number of samples required to robustly infer the hidden
states. Our approach outperforms existing methods in numerical experiments.
1 Introduction
Modeling of large-scale stochastic phenomena with both spatial and temporal (spatiotemporal)
evolution is a fundamental problem in the applied sciences and social networks. The spatial and
temporal evolution in such domains is constrained by stochastic partial differential equations, whose
structure and parameters may be time-varying and unknown. While modeling spatiotemporal
phenomena has traditionally been the province of the field of geostatistics, it has in recent years
gained more attention in the machine learning community [2]. The data-driven models developed
through machine learning techniques provide a way to capture complex spatiotemporal phenomena
that are not easily modeled by first-principles alone, such as stochastic partial differential equations.
In the machine learning community, kernel methods represent a class of extremely well-studied and
powerful methods for inference in spatial domains; in these techniques, correlations between the input
variables are encoded through a covariance kernel, and the model is formed through a linear weighted
combination of the kernels [14]. In recent years, kernel methods have been applied to spatiotemporal
modeling with varying degrees of success [2, 14]. Many recent techniques in spatiotemporal modeling
have focused on nonstationary covariance kernel design and associated hyperparameter learning
algorithms [4, 7, 12]. The main benefit of careful design of covariance kernels over approaches that
simply include time as an additional input variable is that they can account for intricate spatiotemporal
couplings. However, there are two key challenges with these approaches: the first is ensuring
the scalability of the model to large scale phenomena, which manifests due to the fact that the
hyperparameter optimization problem is not convex in general, leading to methods that are difficult
to implement, susceptible to local minima, and that can become computationally intractable for
large datasets. In addition to the challenge of modeling spatiotemporally varying processes, we
are interested in addressing a second, very important, and widely unaddressed challenge: given a
predictive model of the spatiotemporal phenomena, how can the current latent state of the phenomena
be estimated using as few sensor measurements as possible? This is called the monitoring problem.
Monitoring a spatiotemporal phenomenon is concerned with estimating its current state, predicting
its future evolution, and inferring the initial conditions utilizing limited sensor measurements. The
key challenges here manifest due to the fact that it is typically infeasible or expensive to deploy
sensors at a large scale across vast spatial domains. To minimize the number of sensors deployed, a
predictive data-driven model of the spatiotemporal evolution could be learned from historic datasets
or through remote sensing (e.g. satellite, radar) datasets. Then, to monitor the phenomenon, the
key problem would boil down to reliably and quickly estimating the evolving latent state of the
phenomena utilizing measurements from very few sampling locations.
In this paper, we present an alternative perspective on solving the spatiotemporal monitoring problem
that brings together kernel-based modeling, systems theory, and Bayesian filtering. Our main
contributions are two-fold: first, we demonstrate that spatiotemporal functional evolution can be
modeled using stationary kernels with a linear dynamical systems layer on their mixing weights. In
other words, the model proposed here posits differential constraints, embodied as a linear dynamical
system, on the spatiotemporal evolution of kernel-based models, such as Gaussian processes.
This approach does not necessarily require the design of complex spatiotemporal kernels, and can
accommodate positive-definite kernels on any domain on which it is possible to define them, which
includes non-Euclidean domains such as Riemannian manifolds, strings, graphs and images [6].
Second, we show that the model can be utilized to determine sensing locations that guarantee that the
hidden states of functional evolution can be estimated using a Bayesian state-estimator with very few
measurements. We provide sufficient conditions on the number and location of sensor measurements
required and prove non-conservative lower bounds on the minimum number of sampling locations.
The validity of the presented model and sensing techniques is corroborated using synthetic and large
real datasets.
1.1 Related Work
There is a large body of literature on spatiotemporal modeling in geostatistics, where specific process-dependent kernels can be used [17, 2]. From the machine learning perspective, a naive approach is to
utilize both spatial and temporal variables as inputs to a Mercer kernel [10]. However, this technique
leads to an ever-growing kernel dictionary. Furthermore, constraining the dictionary size or utilizing
a moving window will occlude learning of long-term patterns. Periodic or nonstationary covariance
functions and nonlinear transformations have been proposed to address this issue [7, 14]. Work
focusing on nonseparable and nonstationary covariance kernels seeks to design kernels optimized
for environment-specific dynamics, and to tune their hyperparameters in local regions of the input
space. Seminal work in [5] proposes a process convolution approach for space-time modeling. This
model captures nonstationary structure by allowing the convolution kernel to vary across the input
space. This approach can be extended to a class of nonstationary covariance functions, thereby
allowing the use of a Gaussian process (GP) framework, as shown in [9]. However, since this
model's hyperparameters are inferred using MCMC integration, its application has been limited to
smaller datasets. To overcome this limitation, [12] proposes to use the mean estimates of a second
isotropic GP (defined over latent length scales) to parameterize the nonstationary covariances. Finally,
[4] considers anisotropic variation across different dimensions of the input space for the second GP, as
opposed to the isotropic variation of [12]. Issues with this line of approach include the nonconvexity of
the hyperparameter optimization problem and the fact that selection of an appropriate nonstationary
covariance function for the task at hand is a nontrivial design decision (as noted in [16]).
Apart from directly modeling the covariance function using additional latent GPs, there exist several
other approaches for specifying nonstationary GP models. One approach maps the nonstationary
spatial process into a latent space, in which the problem becomes approximately stationary [15].
Along similar lines, [11] extends the input space by adding latent variables, which allows the model
to capture nonstationarity in original space. Both these approaches require MCMC sampling for
inference, and as such are subject to the limitations mentioned in the preceding paragraph.
A geostatistics approach that finds dynamical transition models on the linear combination of weights
of a parameterized model [2, 8] is advantageous when the spatial and temporal dynamics are
hierarchically separated, leading to a convex learning problem. As a result, complex nonstationary
kernels are often not necessary (although they can be accommodated). The approach presented in this
paper aligns closely with this vein of work. A system theoretic study of this viewpoint enables the
fundamental contributions of the paper, which are 1) allowing for inference on more general domains
with a larger class of basis functions than those typically considered in the geostatistics community,
and 2) quantifying the minimum number of measurements required to estimate the state of functional
evolution.

Figure 1: Two types of Hilbert space evolutions. Left: discrete switches in the RKHS H; right: smooth evolution in H.
Figure 2: Shaded observation matrices for a dictionary of atoms. (a) 1-shaded (Def. 1); (b) 2-shaded (Eq. (4)).
It should be noted that the contribution of the paper concerning sensor placement is to provide
sufficient conditions for monitoring rather than optimization of the placement locations, hence a
comparison with these approaches is not considered in the experiments.
2 Kernel Observers
This section outlines our modeling framework and presents theoretical results associated with the
number of sampling locations required for monitoring functional evolution.
2.1 Problem Formulation
We focus on predictive inference of a time-varying stochastic process, whose mean $f$ evolves
temporally as $f_{\tau+1} \sim \mathcal{F}(f_\tau, \eta_\tau)$, where $\mathcal{F}$ is a distribution varying with time $\tau$ and exogenous inputs
$\eta$. Our approach builds on the fact that in several cases, temporal evolution can be hierarchically
separated from spatial functional evolution. A classical and quite general example of this is the
abstract evolution equation (AEO), which can be defined as the evolution of a function $u$ embedded
in a Banach space $\mathcal{B}$: $\dot{u}(t) = \mathcal{L} u(t)$, subject to $u(0) = u_0$, where $\mathcal{L} : \mathcal{B} \to \mathcal{B}$ determines spatiotemporal
transitions of $u \in \mathcal{B}$ [1]. This model of spatiotemporal evolution is very general (AEOs, for example,
model many PDEs), but working in Banach spaces can be computationally taxing. A simple way
to make the approach computationally realizable is to place restrictions on $\mathcal{B}$: in particular, we
restrict the sequence $f_\tau$ to lie in a reproducing kernel Hilbert space (RKHS), the theory of which
provides powerful tools for generating flexible classes of functions with relative ease [14]. In a
kernel-based model, $k : \Omega \times \Omega \to \mathbb{R}$ is a positive-definite Mercer kernel on a domain $\Omega$ that models
the covariance between any two points in the input space, and implies the existence of a smooth
map $\psi : \Omega \to \mathcal{H}$, where $\mathcal{H}$ is an RKHS with the property $k(x, y) = \langle \psi(x), \psi(y) \rangle_{\mathcal{H}}$. The key insight
behind the proposed model is that spatiotemporal evolution in the input domain corresponds to
temporal evolution of the mixing weights of a kernel model alone in the functional domain. Therefore,
$f_\tau$ can be modeled by tracing the evolution of its mean embedded in an RKHS using switched ordinary
differential equations (ODEs) when the evolution is continuous, or switched difference equations when
it is discrete (Figure 1). The advantage of this approach is that it allows us to utilize powerful ideas
from systems theory for deriving necessary and sufficient conditions for spatiotemporal monitoring. In
this paper, we restrict our attention to the class of functional evolutions $\mathcal{F}$ defined by linear Markovian
transitions in an RKHS. While extension to the nonlinear case is possible (and non-trivial), it is not
pursued in this paper, to help ease the exposition of the key ideas. The class of linear transitions in
an RKHS is rich enough to model many real-world datasets, as suggested by our experiments.
Let $y_\tau \in \mathbb{R}^N$ be the measurements of the function available from $N$ sensors at time $\tau$, $\mathcal{A} : \mathcal{H} \to \mathcal{H}$
be a linear transition operator in the RKHS $\mathcal{H}$, and $\mathcal{K} : \mathcal{H} \to \mathbb{R}^N$ be a linear measurement operator.
The model for the functional evolution and measurement studied in this paper is:
$$f_{\tau+1} = \mathcal{A} f_\tau + \eta_\tau, \qquad y_\tau = \mathcal{K} f_\tau + \zeta_\tau, \qquad (1)$$
where $\eta_\tau$ is a zero-mean stochastic process in $\mathcal{H}$, and $\zeta_\tau$ is a Wiener process in $\mathbb{R}^N$. Classical
treatments of kernel methods emphasize that for most kernels, the feature map $\psi$ is unknown,
and possibly infinite-dimensional; this forces practitioners to work in the dual space of $\mathcal{H}$, whose
dimensionality is the number of samples in the dataset being modeled. This conventional wisdom
precludes the use of kernel methods for most tasks involving modern datasets, which may have
millions and sometimes billions of samples [13]. An alternative is to work with a feature map
$\widehat{\psi}(x) := [\,\widehat{\psi}_1(x)\ \cdots\ \widehat{\psi}_M(x)\,]$ to an approximate feature space $\widehat{\mathcal{H}}$, with the property that for every
element $f \in \mathcal{H}$, there exist $\widehat{f} \in \widehat{\mathcal{H}}$ and $\epsilon > 0$ s.t. $\|f - \widehat{f}\| < \epsilon$ for an appropriate function norm. A few such
approximations are listed below.
Dictionary of atoms. Let $\Omega$ be compact. Given points $\mathcal{C} = \{c_1, \ldots, c_M\}$, $c_i \in \Omega$, we have a
dictionary of atoms $\mathcal{F}^{\mathcal{C}} = \{\psi(c_1), \ldots, \psi(c_M)\}$, $\psi(c_i) \in \mathcal{H}$, the span of which is a strict subspace
$\widehat{\mathcal{H}}$ of the RKHS $\mathcal{H}$ generated by the kernel. Here,
$$\widehat{\psi}_i(x) := \langle \psi(x), \psi(c_i) \rangle_{\mathcal{H}} = k(x, c_i). \qquad (2)$$
Low-rank approximations. Let $\Omega$ be compact, let $\mathcal{C} = \{c_1, \ldots, c_M\}$, $c_i \in \Omega$, and let $K \in \mathbb{R}^{M \times M}$,
$K_{ij} := k(c_i, c_j)$ be the Gram matrix computed from $\mathcal{C}$. This matrix can be diagonalized to compute
approximations $(\widehat{\lambda}_i, \widehat{\phi}_i(x))$ of the eigenvalues and eigenfunctions $(\lambda_i, \phi_i(x))$ of the kernel [18].
These spectral quantities can then be used to compute $\widehat{\psi}_i(x) := \sqrt{\widehat{\lambda}_i}\,\widehat{\phi}_i(x)$.
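To make this construction concrete, the following sketch (our own illustration, not code accompanying the paper) builds the low-rank Nyström-type feature map from the Gram matrix of the centers; the kernel bandwidth, the tolerance, and the helper names are assumptions for the example.

import numpy as np

def rbf(x, y, sigma=0.2):
    # Gaussian RBF kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2))
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))

def nystrom_features(C, sigma=0.2):
    """Return x -> psi_hat(x) (dimension <= M after dropping null directions)."""
    G = np.array([[rbf(ci, cj, sigma) for cj in C] for ci in C])  # Gram matrix
    lam, U = np.linalg.eigh(G)                                    # G = U diag(lam) U^T
    keep = lam > 1e-10                                            # drop numerically null directions
    lam, U = lam[keep], U[:, keep]
    def psi_hat(x):
        kx = np.array([rbf(x, c, sigma) for c in C])              # [k(x, c_1), ..., k(x, c_M)]
        return (U / np.sqrt(lam)).T @ kx                          # Lambda^{-1/2} U^T k_C(x)
    return psi_hat

# Sanity check: <psi_hat(x), psi_hat(y)> should approximate k(x, y) near the centers.
C = [np.array([c]) for c in np.linspace(0.0, 1.0, 30)]
psi = nystrom_features(C)
x, y = np.array([0.3]), np.array([0.35])
print(np.dot(psi(x), psi(y)), rbf(x, y))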
Random Fourier features. Let $\Omega \subset \mathbb{R}^n$ be compact, and let $k(x, y) = e^{-\|x - y\|^2 / 2\sigma^2}$ be the
Gaussian RBF kernel. Then random Fourier features approximate the kernel feature map as $\widehat{\psi}_\theta : \Omega \to \widehat{\mathcal{H}}$,
where $\theta$ is a sample from the Fourier transform of $k(x, y)$, with the property that $k(x, y) = \mathbb{E}_\theta[\langle \widehat{\psi}_\theta(x), \widehat{\psi}_\theta(y) \rangle_{\widehat{\mathcal{H}}}]$ [13]. In this case, if $V \in \mathbb{R}^{M/2 \times n}$ is a random matrix representing the sample
$\theta$, then $\widehat{\psi}_i(x) := [\,\frac{1}{\sqrt{M}} \sin([Vx]_i),\ \frac{1}{\sqrt{M}} \cos([Vx]_i)\,]$. Similar approximations exist for other radially
symmetric and dot product kernels.
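The random Fourier feature map admits an equally short sketch. The code below is our own hedged illustration of the construction just described; we normalize by the number of random frequencies so that the Monte Carlo estimate of the kernel is unbiased, which differs from the text's $1/\sqrt{M}$ convention only by a constant factor.

import numpy as np

def rff_features(M, n, sigma, seed=0):
    # M total features (M/2 sine/cosine pairs); rows of V are samples from the
    # Fourier transform of the Gaussian RBF kernel, i.e. N(0, sigma^-2 I).
    rng = np.random.default_rng(seed)
    V = rng.normal(scale=1.0 / sigma, size=(M // 2, n))
    def psi_hat(x):
        Vx = V @ x
        # Dividing by sqrt(M/2) makes <psi(x), psi(y)> an unbiased estimate of k(x, y).
        return np.concatenate([np.sin(Vx), np.cos(Vx)]) / np.sqrt(M // 2)
    return psi_hat

# Sanity check against the exact kernel:
psi = rff_features(M=2000, n=3, sigma=1.0)
x, y = np.ones(3), np.zeros(3)
print(float(np.dot(psi(x), psi(y))), float(np.exp(-np.sum((x - y) ** 2) / 2.0)))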
In the approximate space case, we replace the transition operator $\mathcal{A} : \mathcal{H} \to \mathcal{H}$ in (1) by $\widehat{\mathcal{A}} : \widehat{\mathcal{H}} \to \widehat{\mathcal{H}}$.
This approximate regime, which trades off the flexibility of a truly nonparametric approach for
computational realizability, still allows for the representation of rich phenomena, as will be seen in
the sequel. The finite-dimensional evolution equations approximating (1) in dual form are
$$w_{\tau+1} = \widehat{A} w_\tau + \eta_\tau, \qquad y_\tau = K w_\tau + \zeta_\tau, \qquad (3)$$
where we have matrices $\widehat{A} \in \mathbb{R}^{M \times M}$, $K \in \mathbb{R}^{N \times M}$, the vectors $w_\tau \in \mathbb{R}^M$, and where we have
slightly abused notation to let $\eta_\tau$ and $\zeta_\tau$ denote their $\widehat{\mathcal{H}}$ counterparts. Here $K$ is the matrix whose
rows are of the form $K_{(i)} = \widehat{\psi}(x_i) = [\,\widehat{\psi}_1(x_i)\ \widehat{\psi}_2(x_i)\ \cdots\ \widehat{\psi}_M(x_i)\,]$. In systems-theoretic language,
each row of $K$ corresponds to a measurement at a particular location, and the matrix itself acts as
a measurement operator. We define the generalized observability matrix [20] as
$$\mathcal{O}_{\mathcal{T}} = \big[\, K \widehat{A}^{\tau_1};\ \cdots;\ K \widehat{A}^{\tau_L} \,\big],$$
where $\mathcal{T} = \{\tau_1, \ldots, \tau_L\}$ is the set of time instances $\tau_i$ at which we apply the operator $K$. A linear system
is said to be observable if $\mathcal{O}_{\mathcal{T}}$ has full column rank (i.e. $\operatorname{Rank} \mathcal{O}_{\mathcal{T}} = M$) for $\mathcal{T} = \{0, 1, \ldots, M-1\}$
[20]. Observability guarantees two critical facts: firstly, it guarantees that the state $w_0$ can be
recovered exactly from a finite series of measurements $\{y_{\tau_1}, y_{\tau_2}, \ldots, y_{\tau_L}\}$; in particular, defining
$\bar{y}_{\mathcal{T}} = \big[\, y_{\tau_1}^T, y_{\tau_2}^T, \cdots, y_{\tau_L}^T \,\big]^T$, we have that $\bar{y}_{\mathcal{T}} = \mathcal{O}_{\mathcal{T}} w_0$. Secondly, it guarantees that a feedback-based
observer can be designed such that the estimate of $w_\tau$, denoted by $\widehat{w}_\tau$, converges exponentially fast
to $w_\tau$ in the limit of samples. Note that all our theoretical results assume $\widehat{A}$ is available: while we
perform system identification in the experiments (Section 3.3), it is not the focus of the paper.
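The observability test is a direct rank computation. A minimal sketch (our own illustration), assuming the dynamics matrix and observation matrix are given as NumPy arrays:

import numpy as np

def observability_matrix(A_hat, K, taus):
    # Stack K @ A_hat^tau for tau in taus (the generalized observability matrix).
    return np.vstack([K @ np.linalg.matrix_power(A_hat, t) for t in taus])

def is_observable(A_hat, K, taus=None):
    M = A_hat.shape[0]
    if taus is None:
        taus = range(M)                      # default T = {0, 1, ..., M-1}
    O = observability_matrix(A_hat, K, list(taus))
    return np.linalg.matrix_rank(O) == M     # full column rank <=> observable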
We are now in a position to formally state the spatiotemporal modeling and inference problem
considered: given a spatiotemporally evolving system modeled using (3), choose a set of $N$ sensing
locations such that even with $N \ll M$, the functional evolution of the spatiotemporal model can
be estimated (which corresponds to monitoring) and can be predicted robustly (which corresponds
to Bayesian filtering). Our approach to solving this problem relies on the design of the measurement
operator $K$ so that the pair $(K, \widehat{A})$ is observable: any Bayesian state estimator (e.g. a Kalman filter)
utilizing this pair is denoted as a kernel observer¹. We will leverage the spectral decomposition of
$\widehat{A}$ for this task (see the supplementary material for details on the spectral decomposition).
2.2 Main Results

In this section, we prove results concerning the observability of spatiotemporally varying functions
modeled by the functional evolution and measurement equations (3) formulated in Section 2.1. In
particular, observability of the system states implies that we can recover the current state of the
spatiotemporally varying function using a small number of sampling locations $N$, which allows us to
1) track the function, and 2) predict its evolution forward in time. We work with the approximation
$\widehat{\mathcal{H}} \subset \mathcal{H}$: given $M$ basis functions, this implies that the dual space of $\widehat{\mathcal{H}}$ is $\mathbb{R}^M$. Proposition 1 shows
that if $\widehat{A}$ has a full-rank Jordan decomposition, the observation matrix $K$ meeting a condition called
shadedness (Definition 1) is sufficient for the system to be observable. Proposition 2 provides a
lower bound on the number of sampling locations required for observability which holds for any $\widehat{A}$.
Proposition 3 constructively shows the existence of an abstract measurement map $\widetilde{K}$ achieving this
lower bound. Finally, since the measurement map does not have the structure of a kernel matrix,
a slightly weaker sufficient condition for the observability of any $\widehat{A}$ is given in Theorem 1. Proofs of all
claims are in the supplementary material.

¹In the case where no measurements are taken, for the sake of consistency, we denote the state estimator as
an autonomous kernel observer, despite this being something of an oxymoron.
Definition 1 (Shaded Observation Matrix). Given $k : \Omega \times \Omega \to \mathbb{R}$ positive-definite on a domain $\Omega$,
let $\{\widehat{\psi}_1(x), \ldots, \widehat{\psi}_M(x)\}$ be the set of bases generating an approximate feature map $\widehat{\psi} : \Omega \to \widehat{\mathcal{H}}$, and
let $X = \{x_1, \ldots, x_N\}$, $x_i \in \Omega$. Let $K \in \mathbb{R}^{N \times M}$ be the observation matrix, where $K_{ij} := \widehat{\psi}_j(x_i)$.
For each row $K_{(i)} := [\,\widehat{\psi}_1(x_i)\ \cdots\ \widehat{\psi}_M(x_i)\,]$, define the set $\mathcal{I}^{(i)} := \{\iota_1^{(i)}, \iota_2^{(i)}, \ldots, \iota_{M_i}^{(i)}\}$ of the indices
in row $i$ of the observation matrix which are nonzero. Then if $\bigcup_{i \in \{1, \ldots, N\}} \mathcal{I}^{(i)} = \{1, 2, \ldots, M\}$, we
denote $K$ as a shaded observation matrix (see Figure 2a).
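Shadedness is easy to verify numerically: the union of the row supports of $K$ must cover all $M$ columns. A small sketch of the check (ours, with an assumed numerical tolerance):

import numpy as np

def is_shaded(K, tol=1e-12):
    """Definition 1: every column of the observation matrix K (N x M)
    must be nonzero in at least one row."""
    nonzero = np.abs(K) > tol                  # entrywise support of K
    return bool(np.all(nonzero.any(axis=0)))   # union of row supports covers {1, ..., M}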
This definition seems quite abstract, so the following remark considers a more concrete example.

Remark 1. Let $\widehat{\psi}$ be generated by the dictionary given by $\mathcal{C} = \{c_1, \ldots, c_M\}$, $c_i \in \Omega$. Note that since
$\widehat{\psi}_j(x_i) = \langle \psi(x_i), \psi(c_j) \rangle_{\mathcal{H}} = k(x_i, c_j)$, $K$ is the kernel matrix between $X$ and $\mathcal{C}$. For the kernel
matrix to be shaded thus implies that there does not exist an atom $\psi(c_j)$ such that the projections
$\langle \psi(x_i), \psi(c_j) \rangle_{\mathcal{H}}$ vanish for all $x_i$, $1 \le i \le N$. Intuitively, the shadedness property requires that the
sensor locations $x_i$ are privy to information propagating from every $c_j$. As an example, note that, in
principle, for the Gaussian kernel, a single row generates a shaded kernel matrix².
Proposition 1. Given $k : \Omega \times \Omega \to \mathbb{R}$ positive-definite on a domain $\Omega$, let $\{\widehat{\psi}_1(x), \ldots, \widehat{\psi}_M(x)\}$ be
the set of bases generating an approximate feature map $\widehat{\psi} : \Omega \to \widehat{\mathcal{H}}$, and let $X = \{x_1, \ldots, x_N\}$,
$x_i \in \Omega$. Consider the discrete linear system on $\widehat{\mathcal{H}}$ given by the evolution and measurement equations
(3). Suppose that a full-rank Jordan decomposition of $\widehat{A} \in \mathbb{R}^{M \times M}$ of the form $\widehat{A} = P \Lambda P^{-1}$
exists, where $\Lambda = [\,\Lambda_1\ \cdots\ \Lambda_O\,]$, and there are no repeated eigenvalues. Then, given a set of time
instances $\mathcal{T} = \{\tau_1, \tau_2, \ldots, \tau_L\}$ and a set of sampling locations $X = \{x_1, \ldots, x_N\}$, the system (3)
is observable if the observation matrix $K_{ij}$ is shaded according to Definition 1, $\mathcal{T}$ has distinct values,
and $|\mathcal{T}| \ge M$.
When the eigenvalues of the system matrix are repeated, it is not enough for $K$ to be shaded. In
the next proposition, we take a geometric approach and utilize the rational canonical form of $\widehat{A}$ to
obtain a lower bound on the number of sampling locations required. Let $r$ be the number of unique
eigenvalues of $\widehat{A}$, and let $\gamma_{\lambda_i}$ denote the geometric multiplicity of eigenvalue $\lambda_i$. Then the cyclic
index of $\widehat{A}$ is defined as $\ell = \max_{1 \le i \le r} \gamma_{\lambda_i}$ [19] (see the supplementary material for details).
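Since the geometric multiplicity of $\lambda$ is $\dim \ker(\widehat{A} - \lambda I)$, the cyclic index can be computed with rank calculations, as in the following sketch (our own illustration; the eigenvalue-clustering tolerance is an assumption):

import numpy as np

def cyclic_index(A_hat, tol=1e-9):
    """l = max over unique eigenvalues of the geometric multiplicity,
    computed as dim ker(A - lambda I) = M - rank(A - lambda I)."""
    M = A_hat.shape[0]
    unique = []
    for lam in np.linalg.eigvals(A_hat):
        # Cluster numerically equal eigenvalues before computing multiplicities.
        if not any(abs(lam - mu) < tol for mu in unique):
            unique.append(lam)
    gammas = [max(1, M - np.linalg.matrix_rank(A_hat - lam * np.eye(M), tol=tol))
              for lam in unique]
    return max(gammas)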
Proposition 2. Suppose that the conditions in Proposition 1 hold, with the relaxation that the Jordan
blocks $[\,\Lambda_1\ \cdots\ \Lambda_O\,]$ may have repeated eigenvalues (i.e. there exist $\lambda_i$ and $\lambda_j$, $i \ne j$, such that
$\lambda_i = \lambda_j$). Then there exist kernels $k(x, y)$ such that the lower bound $\ell$ on the number of sampling
locations $N$ is given by the cyclic index of $\widehat{A}$.
The supplementary material gives a concrete example to build intuition regarding this lower bound.
We now show how to construct a matrix $\widetilde{K}$ corresponding to the lower bound $\ell$.

Proposition 3. Given the conditions stated in Proposition 2, it is possible to construct a measurement
map $\widetilde{K} \in \mathbb{R}^{\ell \times M}$ for the system given by (3), such that the pair $(\widetilde{K}, \widehat{A})$ is observable.
The construction provided in the proof of Proposition 3 is utilized in Algorithm 1, which uses
the rational canonical structure of $\widehat{A}$ to generate a series of vectors $v_i \in \mathbb{R}^M$ whose iterates
$\{v_1, \ldots, \widehat{A}^{m_1 - 1} v_1, \ldots, v_\ell, \ldots, \widehat{A}^{m_\ell - 1} v_\ell\}$ generate a basis for $\mathbb{R}^M$.

Algorithm 1 Measurement Map $\widetilde{K}$
Input: $\widehat{A} \in \mathbb{R}^{M \times M}$
Compute the rational canonical form $C = Q^{-1} \widehat{A}^T Q$. Set $C_0 := C$ and $M_0 := M$.
for $i = 1$ to $\ell$ do
  Obtain the minimal polynomial $\psi_i(\lambda)$ of $C_{i-1}$. This returns associated indices $J^{(i)} \subset \{1, 2, \ldots, M_{i-1}\}$.
  Construct a vector $v_i \in \mathbb{R}^M$ such that $\psi_{v_i}(\lambda) = \psi_i(\lambda)$.
  Use the indices $\{1, 2, \ldots, M_{i-1}\} \setminus J^{(i)}$ to select the matrix $C_i$. Set $M_i := |\{1, 2, \ldots, M_{i-1}\} \setminus J^{(i)}|$.
end for
Compute $\bar{K} = [\,v_1^T, v_2^T, \ldots, v_\ell^T\,]^T$
Output: $\widetilde{K} = \bar{K} Q^{-1}$

²However, in this case, the matrix can have many entries that are extremely close to zero, and will probably
be very ill-conditioned.

Unfortunately, the measurement map $\widetilde{K}$, being an abstract construction unrelated to the kernel, does
not directly select $X$. We show how to use the measurement map to guide a search for $X$ in a remark
in the supplementary material. For now, we state a sufficient condition for observability of a general system.
Theorem 1. Suppose that the conditions in Proposition 1 hold, with the relaxation that the Jordan
blocks $[\,\Lambda_1\ \cdots\ \Lambda_O\,]$ may have repeated eigenvalues. Let $\ell$ be the cyclic index of $\widehat{A}$. Define
$$K = \big[\, K^{(1)T}\ \cdots\ K^{(\ell)T} \,\big]^T \qquad (4)$$
as the $\ell$-shaded matrix which consists of $\ell$ shaded matrices with the property that any subset of $\ell$
columns in the matrix is linearly independent. Then system (3) is observable if $\mathcal{T}$ has distinct values,
and $|\mathcal{T}| \ge M$.
While Theorem 1 is a quite general result, the condition that any $\ell$ columns of $K$ be linearly
independent is very stringent. One scenario where this condition can be met with minimal
measurements is the case when the feature map $\widehat{\psi}(x)$ is generated by a dictionary of
atoms with the Gaussian RBF kernel evaluated at sampling locations $\{x_1, \ldots, x_N\}$ according to
(2), where $x_i \in \Omega \subset \mathbb{R}^d$, and the $x_i$ are sampled from a non-degenerate probability distribution on
$\Omega$ such as the uniform distribution. For a semi-deterministic approach, when the dynamics matrix
$\widehat{A}$ is block-diagonal, a simple heuristic is given in a remark in the supplementary material. Note that in
practice the matrix $\widehat{A}$ needs to be inferred from measurements of the process $f_\tau$. If no assumptions
are placed on $\widehat{A}$, at least $M$ sensors are required for the system identification phase. Future work will
study the precise conditions under which system identification is possible with fewer than $M$ sensors.
Finally, computing the Jordan and rational canonical forms can be computationally expensive: see the
supplementary material for more details. We note that the crucial step in our approach is computing the cyclic
index, which gives us the minimum number of sensors that need to be deployed, and the computational
complexity of which is $O(M^3)$. Computation of the canonical forms is required only in the case where we need
to strictly realize the lower bound on the number of sensors.
3 Experimental Results

3.1 Sampling Locations for Synthetic Data Sets
The goal of this experiment is to investigate the dependency of the observability of system (3) on
the shaded observation matrix and the lower bound presented in Proposition 2. The domain is fixed
on the interval $\Omega = [0, 2\pi]$. First, we pick sets of points $\mathcal{C}^{(\gamma)} = \{c_1, \ldots, c_{M_\gamma}\}$, $c_j \in \Omega$, $M = 50$,
and construct a dynamics matrix $A = \Lambda \in \mathbb{R}^{M \times M}$ with cyclic index 5. We pick the RBF kernel
$k(x, y) = e^{-\|x - y\|^2 / 2\sigma^2}$, $\sigma = 0.02$. Generating samples $X = \{x_1, \ldots, x_N\}$, $x_i \in \Omega$, randomly, we
compute the $\ell$-shaded property and observability for this system. Figure 3a shows that shadedness is
a necessary condition for observability, validating Proposition 1; the slight gap between shadedness
and observability here can be explained by numerical issues in computing the rank of $\mathcal{O}_{\mathcal{T}}$. Next,
we again pick $M = 50$, but for a system with cyclic index $\ell = 18$. We constructed the measurement
map $\widetilde{K}$ using Algorithm 1, and used the heuristic from the supplementary material (Algorithm 2) as
well as random sampling to generate the sampling locations $X$. These results are presented in Figure
3b. The plot for random sampling has been averaged over 100 runs. It is evident from the plot that
observability cannot be achieved for a number of samples $N < \ell$. Clearly, the heuristic
outperforms random sampling; note, however, that our intent is not to compare the heuristic against
random sampling, but to show that the lower bound $\ell$ provides decisive guidelines for selecting the
number of samples while using the computationally efficient random approach.
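A compressed version of this experiment can be reproduced along the following lines (our own sketch). The dynamics matrix below is a stand-in with cyclic index 1; we do not reconstruct the paper's cyclic-index-5 $\Lambda$, so the exact curves will differ.

import numpy as np

rng = np.random.default_rng(0)
M, sigma = 50, 0.02
C = np.linspace(0.0, 2 * np.pi, M)          # dictionary centers on [0, 2*pi]
A_hat = np.diag(rng.uniform(0.5, 1.0, M))   # stand-in dynamics (distinct eigenvalues)

def obs_matrix(X):
    # Rows psi_hat(x_i) = [k(x_i, c_1), ..., k(x_i, c_M)] for the dictionary of atoms (2).
    return np.exp(-(X[:, None] - C[None, :]) ** 2 / (2 * sigma ** 2))

for N in (5, 10, 20, 40):
    observable = 0
    for _ in range(100):                     # averaged over 100 random draws, as in the paper
        K = obs_matrix(rng.uniform(0, 2 * np.pi, N))
        O = np.vstack([K @ np.linalg.matrix_power(A_hat, t) for t in range(M)])
        observable += (np.linalg.matrix_rank(O, tol=1e-8) == M)
    print(N, observable / 100)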
3.2 Comparison With Nonstationary Kernel Methods on Real-World Data
We use two real-world datasets to evaluate and compare the kernel observer with the two different
lines of approach for non-stationary kernels discussed in Section 1.1. For the Process Convolution
with Local Smoothing Kernel (PCLSK) and Latent Extension of Input Space (LEIS) approaches, we
compare with NOSTILL-GP [4] and [11] respectively, on the Intel Berkeley and Irish Wind datasets.
Model inference for the kernel observer involved three steps: 1) picking the Gaussian RBF kernel
$k(x, y) = e^{-\|x - y\|^2 / 2\sigma^2}$, a search for the ideal $\sigma$ is performed for a sparse Gaussian process model
(with a fixed basis vector set $\mathcal{C}$ selected using the method in [3]); for the datasets discussed in this
section, the number of basis vectors was equal to the number of sensing locations in the training
set, with the domain for the input set defined over $\mathbb{R}^2$; 2) having obtained $\sigma$, Gaussian process inference
is used to generate weight vectors for each time step in the training set, resulting in the sequence
$w_\tau$, $\tau \in \{1, \ldots, T\}$; 3) matrix least-squares is applied to this sequence to infer $\widehat{A}$ (Algorithm 3 in
the supplementary). For prediction in the autonomous setup, $\widehat{A}$ is used to propagate the state $w_\tau$
forward to make predictions with no feedback; in the observer setup, a Kalman filter (Algorithm
4 in the supplementary), with $N$ determined using Proposition 2 and locations picked randomly, is
used to propagate $w_\tau$ forward to make predictions. We also compare with a baseline GP (denoted
"original GP"), which is the sparse GP model trained using all of the available data.
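Step 3 and the observer update admit a short sketch. The least-squares fit of $\widehat{A}$ and the Kalman recursion below (our own illustration) are standard; the noise covariances Q and R are assumed inputs rather than quantities specified in the paper.

import numpy as np

def fit_transition(W):
    """Least-squares estimate of A_hat from a weight sequence W (T x M):
    minimizes sum_t ||w_{t+1} - A w_t||^2 (step 3 of the inference procedure)."""
    W0, W1 = W[:-1].T, W[1:].T               # M x (T-1) matrices
    return W1 @ np.linalg.pinv(W0)

def kalman_step(A, K, w, P, y, Q, R):
    """One predict/update cycle of the Kalman filter used as the kernel observer."""
    w_pred = A @ w                           # predict latent weights
    P_pred = A @ P @ A.T + Q
    S = K @ P_pred @ K.T + R                 # innovation covariance
    G = P_pred @ K.T @ np.linalg.inv(S)      # Kalman gain
    w_new = w_pred + G @ (y - K @ w_pred)
    P_new = (np.eye(len(w)) - G @ K) @ P_pred
    return w_new, P_new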
Our first dataset, the Intel Berkeley research lab temperature data, consists of 50 wireless temperature
sensors in an indoor laboratory region spanning 40.5 meters in length and 31 meters in width³. Training
data consists of temperature data on March 6th, 2004 at intervals of 20 minutes (beginning 00:20
hrs), which totals 72 timesteps. Testing is performed over another 72 timesteps beginning at 12:20
hrs of the same day. Out of 50 locations, we uniformly selected 25 locations each for training and
testing purposes. Results of the prediction error are shown in box-plot form in Figure 4a and as a
time series in Figure 4b; note that "Auto" refers to the autonomous setup. Here, the cyclic index of $\widehat{A}$
was determined to be 2, so $N$ was set to 2 for the kernel observer with feedback. Note that here even
the autonomous kernel observer outperforms PCLSK and LEIS overall, and the kernel observer with
feedback ($N = 2$) does so significantly, which is why we did not include results with $N > 2$.
The second dataset is the Irish wind dataset, consisting of daily average wind speed data collected
from 1961 to 1978 at 12 meteorological stations in the Republic of Ireland⁴. The prediction
error is shown in box-plot form in Figure 5a and as a time series in Figure 5b. Again, the cyclic index of $\widehat{A}$
was determined to be 2. In this case, the autonomous kernel observer's performance is comparable to
PCLSK and LEIS, while the kernel observer with feedback ($N = 2$) again outperforms all other
methods. A table in the supplementary material reports the total training and prediction times associated
with PCLSK, LEIS, and the kernel observer. We observed that 1) the kernel observer is an order of
magnitude faster, and 2) even for small sets, competing methods did not scale well.
3.3 Prediction of Global Ocean Surface Temperature
We analyzed the feasibility of our approach on a large dataset from the National Oceanographic
Data Center: the 4 km AVHRR Pathfinder project, which is a satellite monitoring global ocean
surface temperature (Fig. 6a). This dataset is challenging, with measurements at over 37 million
possible coordinates, but with only around 3-4 million measurements available per day, leading to
a lot of missing data. The goal was to learn the day and night temperature models on data from
the year 2011, and to monitor thereafter for 2012. Success in monitoring would demonstrate two
things: 1) the modeling process can capture spatiotemporal trends that generalize across years, and
2) the observer framework allows us to infer the state using a number of measurements that are
an order of magnitude fewer than available. Note that due to the size of the dataset and the high
computational requirements of the nonstationary kernel methods, a comparison with them was not
pursued. To build the autonomous kernel observer and general kernel observer models, we followed
the same procedure outlined in Section 3.2, but with $\mathcal{C} = \{c_1, \ldots, c_M\}$, $c_j \in \mathbb{R}^2$, $|\mathcal{C}| = 300$.
³http://db.csail.mit.edu/labdata/labdata.html
⁴http://lib.stat.cmu.edu/datasets/wind.desc
Figure 3: Kernel observability results. (a) Shaded vs. observability; (b) heuristic vs. random.
Figure 4: Comparison of the kernel observer to the PCLSK and LEIS methods on the Intel dataset. (a) Error (boxplot); (b) error (time series).
Figure 5: Irish Wind results. (a) Error (boxplot); (b) error (time series).
Figure 6: Performance of the kernel observer over AVHRR satellite 2011-12 data with different numbers of observation locations. (a) AVHRR estimate; (b) error, day (time series); (c) error, night (time series); (d) error, day (boxplot); (e) error, night (boxplot); (f) estimation time (day).

The cyclic index of $\widehat{A}$ was determined to be 250, and hence a Kalman filter for the kernel observer model using
$N \in \{250, 500, 1000\}$ sensors at random locations was utilized to track the system state given a random
initial condition $w_0$. As a fair baseline, the observers are compared to training a sparse GP model
(labeled "original") on approximately 400,000 measurements per day. Figures 6b and 6c compare the
autonomous and feedback approaches with 1,000 samples to the baseline GP; here, it can be seen that
the autonomous observer does well in the beginning, but then incurs an unacceptable amount of error once
the time series goes into 2012, i.e. where the model has not seen any training data, whereas KO does
well throughout. Figures 6d and 6e show a comparison of the RMS error of estimated values against
the real data. This figure shows the trend of the observer obtaining better state estimates as a function of
the number of sensing locations $N$⁵. Finally, the prediction time of KO is much less than retraining
the model at every time step, as shown in Figure 6f.
4 Conclusions
This paper presented a new approach to the problem of monitoring complex spatiotemporally evolving
phenomena with limited sensors. Unlike most Neural Network or Kernel based models, the presented
approach inherently incorporates differential constraints on the spatiotemporal evolution of the mixing
weights of a kernel model. In addition to providing an elegant and efficient model, the main benefit
of the inclusion of the differential constraint in the model synthesis is that it allowed the derivation
of fundamental results concerning the minimum number of sampling locations required, and the
identification of correlations in the spatiotemporal evolution, by building upon the rich literature in
systems theory. These results are non-conservative, and as such provide direct guidance in ensuring
robust real-world predictive inference with distributed sensor networks.
Acknowledgment
This work was supported by AFOSR grant #FA9550-15-1-0146.
⁵Note that we checked the performance of training a GP with only 1,000 samples as a control, but the
average error was about 10 Kelvin, i.e. much worse than KO.
References
[1] Haim Brezis. Functional analysis, Sobolev spaces and partial differential equations. Springer
Science & Business Media, 2010.
[2] Noel Cressie and Christopher K Wikle. Statistics for spatio-temporal data. John Wiley & Sons,
2011.
[3] Lehel Csató and Manfred Opper. Sparse on-line Gaussian processes. Neural Computation,
14(3):641-668, 2002.
[4] Sahil Garg, Amarjeet Singh, and Fabio Ramos. Learning non-stationary space-time models for
environmental monitoring. In Proceedings of the Twenty-Sixth AAAI Conference on Artificial
Intelligence, July 22-26, 2012, Toronto, Ontario, Canada, 2012.
[5] David Higdon. A process-convolution approach to modelling temperatures in the North Atlantic
Ocean. Environmental and Ecological Statistics, 5(2):173-190, 1998.
[6] Sadeep Jayasumana, Richard Hartley, Mathieu Salzmann, Hongdong Li, and Mehrtash Harandi.
Kernel methods on Riemannian manifolds with Gaussian RBF kernels. IEEE Transactions on
Pattern Analysis and Machine Intelligence (TPAMI), 2015.
[7] Chunsheng Ma. Nonstationary covariance functions that model space-time interactions. Statistics & Probability Letters, 61(4):411-419, 2003.
[8] Kanti V Mardia, Colin Goodall, Edwin J Redfern, and Francisco J Alonso. The kriged Kalman
filter. Test, 7(2):217-282, 1998.
[9] C Paciorek and M Schervish. Nonstationary covariance functions for Gaussian process regression.
Advances in Neural Information Processing Systems, 16:273-280, 2004.
[10] Fernando Pérez-Cruz, Steven Van Vaerenbergh, Juan José Murillo-Fuentes, Miguel Lázaro-Gredilla, and Ignacio Santamaria. Gaussian processes for nonlinear signal processing: An
overview of recent advances. Signal Processing Magazine, IEEE, 30(4):40-50, 2013.
[11] Tobias Pfingsten, Malte Kuss, and Carl Edward Rasmussen. Nonstationary Gaussian process
regression using a latent extension of the input space, 2006.
[12] Christian Plagemann, Kristian Kersting, and Wolfram Burgard. Nonstationary Gaussian process
regression using point estimates of local smoothness. In Machine Learning and Knowledge
Discovery in Databases, pages 204-219. Springer, 2008.
[13] Ali Rahimi and Benjamin Recht. Random features for large-scale kernel machines. In NIPS,
pages 1177-1184, 2007.
[14] Carl E. Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine Learning.
The MIT Press, December 2005.
[15] Alexandra M Schmidt and Anthony O'Hagan. Bayesian inference for non-stationary spatial
covariance structure via spatial deformations. Journal of the Royal Statistical Society: Series B
(Statistical Methodology), 65(3):743-758, 2003.
[16] Amarjeet Singh, Fabio Ramos, H Durrant-Whyte, and William J Kaiser. Modeling and decision
making in spatio-temporal processes for environmental surveillance. In Robotics and Automation
(ICRA), 2010 IEEE International Conference on, pages 5490-5497. IEEE, 2010.
[17] Christopher K Wikle. A kernel-based spectral model for non-Gaussian spatio-temporal processes.
Statistical Modelling, 2(4):299-314, 2002.
[18] Christopher Williams and Matthias Seeger. Using the Nyström method to speed up kernel
machines. In NIPS, pages 682-688, 2001.
[19] W Murray Wonham. Linear multivariable control. Springer, 1974.
[20] Kemin Zhou, John C. Doyle, and Keith Glover. Robust and Optimal Control. Prentice Hall,
Upper Saddle River, NJ, 1996.
Non-Linear Dimensionality Reduction
David DeMers* & Garrison Cottrell†
Dept. of Computer Science & Engr., 0114
Institute for Neural Computation
University of California, San Diego
9500 Gilman Dr.
La Jolla, CA 92093-0114
Abstract
A method for creating a non-linear encoder-decoder for multidimensional data
with compact representations is presented. The commonly used technique of
autoassociation is extended to allow non-linear representations, and an objective function which penalizes activations of individual hidden units is shown
to result in minimum dimensional encodings with respect to allowable error in
reconstruction.
1 INTRODUCTION
Reducing dimensionality of data with minimal information loss is important for feature
extraction, compact coding and computational efficiency. The data can be transformed
into "good" representations for further processing, constraints among feature variables
may be identified, and redundancy eliminated. Many algorithms are exponential in the
dimensionality of the input, thus even reduction by a single dimension may provide valuable
computational savings.
Autoassociating feed forward networks with one hidden layer have been shown to extract
the principal components of the data (Baldi & Hornik, 1988). Such networks have been
used to extract features and develop compact encodings of the data (Cottrell, Munro &
Zipser, 1989). Principal Components Analysis projects the data into a linear subspace
*email: demers@cs.ucsd.edu
†email: gary@cs.ucsd.edu
Figure 1: A network capable of non-linear lower dimensional representations of data. Left: auto-associator; right: non-linear "principal components" net. (Input, encoding layer, hidden "bottleneck" layer, decoding layer, output.)
with minimum information loss, by multiplying the data by the eigenvectors of the sample
covariance matrix. By examining the magnitude of the corresponding eigenvalues one can
estimate the minimum dimensionality of the space into which the data may be projected
and estimate the loss. However, if the data lie on a non-linear submanifold of the feature
space, then Principal Components will overestimate the dimensionality. For example, the
covariance matrix of data sampled from a helix in R3 will have full-rank and thus three
principal components. However, the helix is a one-dimensional manifold and can be
(smoothly) parameterized with a single number.
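This helix example is easy to verify numerically. The short check below (our own illustration) shows that all three eigenvalues of the sample covariance are non-negligible, even though the data has one intrinsic degree of freedom.

import numpy as np

t = np.linspace(0, 4 * np.pi, 500)                     # 1-D parameter of the helix
X = np.stack([np.cos(t), np.sin(t), 0.3 * t], axis=1)  # helix embedded in R^3
X -= X.mean(axis=0)
eigvals = np.linalg.eigvalsh(np.cov(X.T))[::-1]
print(eigvals / eigvals.sum())  # all three variances are non-negligible,
                                # so PCA reports three "significant" components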
The addition of hidden layers between the inputs and the representation layer, and between
the representation layer and the outputs provides a network which is capable of learning
non-linear representations (Kramer, 1991; Oja, 1991; Usui, Nakauchi & Nakano, 1991).
Such networks can perform the non-linear analogue to Principal Components Analysis,
and extract "principal manifolds". Figure 1 shows the basic structure of such a network.
However, the dimensionality of the representation layer is problematic. Ideally, the dimensionality of the encoding (and hence the number of representation units needed) would be
determined from the data.
We propose a pruning method for determining the dimensionality of the representation. A
greedy algorithm which successively eliminates representation units by penalizing variances
results in encodings of minimal dimensionality with respect to the allowable reconstruction
error. The algorithm therefore performs non-linear dimensionality reduction (NLDR).
2 DIMENSIONALITY ESTIMATION BY REGULARIZATION
The a priori assignment of the number of units for the representation layer is problematic.
In order to achieve maximum data compression, this number should be as small as possible;
however, one also wants to preserve the information in the data and thus encode the data with
minimum error. If the intrinsic dimensionality is not known ahead of time (as is typical),
some method to estimate the dimensionality is desired. Minimization of the variance of a
representation unit will essentially squeeze the variance of the data into the other hidden
units. Repeated minimization results in increasingly lower-dimensional representations.
More formally, let the dimensionality of the raw data be $n$. We wish to find $F$ and
its approximate inverse $F^{-1}$ such that $\mathbb{R}^n \stackrel{F}{\to} \mathbb{R}^p \stackrel{F^{-1}}{\to} \mathbb{R}^n$, where $p < n$. Let $\vec{y}$ denote the
$p$-dimensional vector whose elements are the $p$ univalued functions $f_i$ which make up $F$. If one
of the component functions $f_i$ is always constant, it is not contributing to the autoassociation
and can be eliminated, yielding a function $F$ with $p - 1$ components. A constant value for
$f_i$ means that the variance of $f_i$ over the data is zero. We add a regularization term to the
objective function penalizing the variance of one of the representation units. If the variance
can be driven to near zero while simultaneously achieving a target error in the primary task
of autoassociation, then the unit being penalized can be pruned.
Let $H_p = \lambda_p \big( \sum_{j=1}^{P} (h_p(\text{net}_j) - E(h_p(\text{net}_j)))^2 \big)$, where $\text{net}_j$ is the net input to the unit given
the $j$th training pattern, $h_p(\text{net}_j)$ is the activation of the $p$th hidden unit in the representation
layer (the one being penalized), and $E$ is the expectation operator. For notational clarity,
the superscripts will be suppressed hereafter. $E(h_p)$ can be estimated as $\bar{h}_p$, the mean
activation of $h_p$ over all patterns in the training data. Then
$$\frac{\partial H_p}{\partial w_{pl}} = \frac{\partial H_p}{\partial \text{net}_p} \frac{\partial \text{net}_p}{\partial w_{pl}} = 2 \lambda_p (h_p - \bar{h}_p)\, h_p'\, o_l,$$
where $h_p'$ is the derivative of the activation function of unit $h_p$ with respect to its input, and
$o_l$ is the output of the $l$th unit in the preceding layer. Let $\delta_p = 2 \lambda_p h_p' (h_p - \bar{h}_p)$. We simply
add $\delta_p$ to the delta of $h_p$ due to backpropagation from the output layer.
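In code, the extra delta for the penalized unit is one line per batch. The sketch below (our own illustration, assuming a logistic activation, with the batch mean standing in for the expectation) mirrors the derivation above.

import numpy as np

def variance_penalty_delta(h_p, net_p, lam_p, dact):
    """Extra delta for the penalized representation unit:
    delta_p = 2 * lambda_p * h'(net_p) * (h_p - mean(h_p)), per training pattern."""
    h_bar = h_p.mean()                        # estimate of E[h_p] over the batch
    return 2.0 * lam_p * dact(net_p) * (h_p - h_bar)

# Example with a logistic unit, whose derivative is h * (1 - h):
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
net_p = np.random.default_rng(0).normal(size=32)          # net inputs over a batch
h_p = sigmoid(net_p)
deltas = variance_penalty_delta(h_p, net_p, lam_p=0.1,
                                dact=lambda z: sigmoid(z) * (1 - sigmoid(z)))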
We first train a multi-layer¹ network to learn the identity map. When error is below a user-specified threshold, $\lambda_i$ is increased for the unit with lowest variance. If network weights can
be found² such that the variance can be reduced below a small threshold while the remaining
units are able to encode the data, the hidden unit in question is no longer contributing to the
autoencoding, and its connections are excised from the network. The process is repeated
until the variance of the unit in question cannot be reduced while maintaining low error.
¹There is no reason to suppose that the encoding and decoding layers must be of the same size.
In fact, it may be that two encoding or decoding layers will provide superior performance. For the
helix example, the decoder had two hidden layers and linear connections from the representation to
the output, while the encoder had a single layer. Kramer (1991) uses information theoretic measures
for choosing the size of the encoding and decoding layers; however, only a fixed representation layer
and equal encoding and decoding layers are used.
²Unbounded weights will allow the same amount of information to pass through the layer with
arbitrarily small variance and using arbitrarily large weights. Therefore the weights in the network
must be bounded. Weight vectors with magnitudes larger than 10 are renormalized after each epoch.
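The overall pruning schedule can be summarized as the following loop; "train_until", "unit_variances", and "prune_unit" are hypothetical helpers around any autoencoder implementation, not functions from the paper.

# A minimal sketch of the pruning schedule described above; the helpers and the
# network attributes (n_rep, lam) are assumed, not part of the original code.
def nldr_prune(net, data, err_thresh=1e-3, var_thresh=1e-4, dlam=0.1):
    train_until(net, data, err_thresh)            # 1) learn the identity map
    while True:
        p = int(min(range(net.n_rep), key=lambda i: unit_variances(net, data)[i]))
        net.lam[p] += dlam                        # 2) penalize the lowest-variance unit
        ok = train_until(net, data, err_thresh)   # 3) retrain with the variance penalty
        if ok and unit_variances(net, data)[p] < var_thresh:
            prune_unit(net, p)                    # 4) excise the unit and repeat
        else:
            return net                            # variance cannot be driven down: stop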
Figure 2: The original 3-D helix data plus reconstruction from a single-parameter encoding.
3 RESULTS
We applied this method to several problems:
1. a closed 1-D manifold in $\mathbb{R}^3$.
2. a 1-D helix in $\mathbb{R}^3$.
3. Time series data generated from the Mackey-Glass delay-differential equation.
4. 160 64-by-64 pixel, 8-bit grayscale face images.
A number of parameter values must be chosen; error threshold, maximum magnitude of
weights, value of Ai when increased, and when to "give up" training. For these experiments,
they were chosen by hand; however, reasonable values can be selected such that the method
can be automated.
3.1 Static Mappings: Circle and Helix
The first problem is interesting because it is known that there is no diffeomorphism from
the circle to the unit interval. Thus (smooth) single parameter encodings cannot cover the
entire circle, though the region of the circle left unparameterized can be made arbitrarily
small. Depending on initial conditions, our technique found one of three different solutions.
Some simulations resulted in a two-dimensional representation with the encodings lying
on a circle in R2. This is a failure to reduce the dimensionality. The other solutions were
both 1-D representations; one "wrapping" the unit interval around the circle, the other
"splitting" the interval into two pieces. The initial architecture consisted of a single 8-unit
encoding layer and two 8-unit decoding layers. $\eta$ was set to 0.01, $\Delta\lambda$ to 0.1, and the error
threshold, $\epsilon$, to 0.001.
The helix problem is interesting because the data appears to be three-dimensional to PCA.
NLDR consistently finds an invertible one-dimensional representation of the data. Figure 2
Figure 3: Data from the Mackey-Glass delay-differential equation with T = 17, correlation dimension 2.1, and the reconstructed signal encoded in two and three dimensions.
shows the original data, along with the network's output when the representation layer was
stimulated with activation ranging from 0.1 to 0.9. The training data were mapped into the
interval 0.213-0.778 using a single (sigmoidal) representation unit. The initial architecture
consisted of a single 10-unit encoding layer and two 10-unit decoding layers. $\eta$ was set to
0.01, $\Delta\lambda$ to 0.1, and the error threshold, $\epsilon$, to 0.001.
3.2 NLDR Applied to Time Series
The Mackey-Glass problem consists of estimation of the intrinsic dimensionality of a
scalar signal. Classically, such time series data is embedded in a space of "high enough"
dimension such that one expects the geometric invariants to be preserved. However, this
may significantly overestimate the number of variables needed to describe the data. Two
different series were examined; parameter settings for the Mackey-Glass equation were
chosen such that the intrinsic dimensionality is 2.1 and 3.5. The data was embedded in a
high dimensional space by the standard technique of recoding as vectors of lagged data. A
3 dimensional representation was found for the 2.1 dimensional data and a 4 dimensional
representation was found for the 3.5 dimensional data. Figure 3 shows the original data and
its reconstruction for the 2.1 dimensional data. Allowing higher reconstruction error resulted
in a 3 dimensional representation for the 3.5 dimensional data, effectively smoothing the
original signal (DeMers, 1992). Figure 4 shows the original data and its reconstruction for
the 3.5 dimensional data. The initial architecture consisted of two 10-unit encoding layers
and two 10-unit decoding layers, and a 7-unit representation layer. The representation layer
was connected directly to the output layer. $\eta$ was set to 0.01, $\Delta\lambda$ to 0.1, and the error
threshold, $\epsilon$, to 0.001.
3.3 Faces
The face image data is much more challenging. The face data are 64 x 64 pixel, 8-bit
grayscale images taken from (Cottrell & Metcalfe, 1991), each of which can be considered
to be a point in a 4,096 dimensional "pixel space". The question addressed is whether
NLDR can find low-dimensional representations of the data which are more useful than
principal components. The data was preprocessed by reduction to the first 50 principal
Figure 4: Data from the Mackey-Glass delay-differential equation with T = 35, correlation dimension 3.5, and the reconstructed signal encoded in four dimensions with two different error thresholds (0.002 and 0.0004).
components³ of the images. These reduced representations were then processed further
by NLDR. The architecture consisted of a 30-unit encoding layer and a 30-unit decoding
layer, and an initial representation layer of 20 units. There were direct connections from the
representation layer to the output layer. $\eta$ was 0.05, $\Delta\lambda$ was 0.1, and $\epsilon$ was 0.001. NLDR
found a five-dimensional representation. Figure 5 shows four of the 160 images after
reduction to the first 50 principal components (used as training) and the same images after
reconstruction from a five dimensional encoding. We are unable to determine whether the
dimensions are meaningful; however, experiments with the decoder show that points inside
the convex hull of the representations project to images which look like faces. Figure 6
shows the reconstructed images from a linear interpolation in "face space" between the two
encodings which are furthest apart.
How useful are the representations obtained from a training set for identification and
classification of other images of the same subjects? The 5-D representations were used
to train a feedforward network to recognize the identity and gender of the subjects, as in
(Cottrell & Metcalfe, 1991). 120 images were used in training and the remaining 40 used
as a test set. The network correctly identified 98% of the training data subjects, and 95%
on the test set. The network achieved 95% correct gender recognition on both the training
and test sets. The misclassified subject is shown in Figure 7. An informal poll of visitors
to the poster in Denver showed that about 2/3 of humans classify the subject as male and
1/3 as female.
Although NLDR resulted in five dimensional encodings of the face data, and thus superficially compresses the data to approximately 55 bits per image or 0.013 bits per pixel,
there is no data compression. Both the decoder portion of the network and the eigenvectors
used in the initial processing must also be stored. These amortize to about 6 bits per pixel,
whereas the original images require only 1.1 bits per pixel under run-length encoding. In
order to achieve data compression, a much larger data set must be obtained in order to find
the underlying human face manifold.
³50 was chosen by eyeballing a graph of the eigenvalues for the point at which they began to
"flatten"; any value between about 40 and 80 would be reasonable.
Figure 5: Four of the original face images and their reconstruction after encoding as five-dimensional data.
Figure 6: The two images with 5-D encodings which are the furthest apart, and the
reconstructions of four 5-D points equally spaced along the line joining them.
Figure 7: UPat" .. the subject whose gender afeedforward network classified incorrectly.
4 CONCLUSIONS
A method for automatically generating a non-linear encoder/decoder for high dimensional
data has been presented. The number of representation units in the final network is an
estimate of the intrinsic dimensionality of the data. The results are sensitive to the choice
of error bound, though the precise relationship is as yet unknown. The size of the encoding
and decoding hidden layers must be controlled to avoid over-fitting; any data set can be
encoded into scalar values given enough resolution. Since we are using gradient search
to solve a global non-linear optimization problem, there is no guarantee that this method
will find the global optimum and avoid convergence to local minima. However, NLDR
consistently constructed low-dimensional encodings which were decodable with low loss.
Acknowledgements
We would like to thank Matthew Turk & Alex Pentland for making their facerec software
available, which was used to extract the eigenvectors of the original face data. The first
author was partially supported by Fellowships from the California Space Institute and the
McDonnell-Pew Foundation.
References
Pierre Baldi and Kurt Hornik (1988) "Neural Networks and Principal Component Analysis:
Learning from Examples without Local Minima", Neural Networks 2, 53-58.
Garrison Cottrell and Paul Munro (1988) "Principal Components Analysis of Images via
Backpropagation", in Proc. SPIE (Cambridge, MA).
Garrison Cottrell, Paul Munro, and David Zipser (1989) "Image Compression by Backpropagation:
A Demonstration of Extensional Programming", in Sharkey, Noel (Ed.), Models
of Cognition: A Review of Cognitive Science, vol. 1.
Garrison Cottrell and Janet Metcalfe (1991) "EMPATH - Face, Emotion and Gender
Recognition using Holons", in Lippmann, R., Moody, J. & Touretzky, D. (eds), Advances
in Neural Information Processing Systems 3.
David DeMers (1992) "Dimensionality Reduction for Non-Linear Time Series", Neural
and Stochastic Methods in Image and Signal Processing (SPIE 1766).
Mark Kramer (1991) "Nonlinear Principal Component Analysis Using Autoassociative
Neural Networks", AIChE Journal 37:233-243.
Erkki Oja (1991) "Data Compression, Feature Extraction, and Autoassociation in Feedforward
Neural Networks", in Kohonen, T., Simula, O. and Kangas, J. (eds), Artificial Neural
Networks, 737-745.
Shiro Usui, Shigeki Nakauchi, and Masae Nakano (1991) "Internal Color Representation
Acquired by a Five-Layer Neural Network", in Kohonen, T., Simula, O. and Kangas, J.
(eds), Artificial Neural Networks, 867-872.
PART VII: THEORY AND ANALYSIS
A Bandit Framework for Strategic Regression
Yang Liu and Yiling Chen
School of Engineering and Applied Science, Harvard University
{yangl,yiling}@seas.harvard.edu
Abstract
We consider a learner's problem of acquiring data dynamically for training a regression model, where the training data are collected from strategic data sources.
A fundamental challenge is to incentivize data holders to exert effort to improve
the quality of their reported data, despite that the quality is not directly verifiable
by the learner. In this work, we study a dynamic data acquisition process where
data holders can contribute multiple times. Using a bandit framework, we leverage the long-term incentive of future job opportunities to incentivize high-quality
contributions. We propose a Strategic Regression-Upper Confidence Bound (SR-UCB)
framework, a UCB-style index combined with a simple payment rule,
where the index of a worker approximates the quality of his past contributions
and is used by the learner to determine whether the worker receives future work.
For linear regression and a certain family of non-linear regression problems, we
show that SR-UCB enables an $O(\sqrt{\log T/T})$-Bayesian Nash Equilibrium (BNE)
where each worker exerts a target effort level that the learner has chosen, with $T$
being the number of data acquisition stages. The SR-UCB framework also has
some other desirable properties: (1) The indexes can be updated in an online fashion
(hence computation is light). (2) A slight variant, namely Private SR-UCB
(PSR-UCB), is able to preserve $(O(\log^{-1} T), O(\log^{-1} T))$-differential privacy for
workers' data, with only a small compromise on incentives (each worker exerting
a target effort level is an $O(\log^{6} T/\sqrt{T})$-BNE).
1 Introduction
More and more data for machine learning nowadays are acquired from distributed, unmonitored
and strategic data sources and the quality of these collected data is often unverifiable. For example,
in a crowdsourcing market, a data requester can pay crowd workers to label samples. While this
approach has been widely adopted, crowdsourced labels have been shown to degrade the learning
performance significantly, see e.g., [19], due to the low quality of the data. How to incentivize
workers to contribute high-quality data is hence a fundamental question that is crucial to the long-term viability of this approach.
Recent works [2,4,10] have considered incentivizing data contributions for the purpose of estimating
a regression model. For example Cai et al. [2] design payment rules so that workers are incentivized
to exert effort to improve the quality of their contributed data, while Cummings et al. [4] design
mechanisms to compensate privacy-sensitive workers for their privacy loss when contributing their
data. These studies focus on a static data acquisition process, only considering one-time data acquisition from each worker. Hence, the incentives completely rely on the payment rule. However, in
stable crowdsourcing markets, workers return to receive additional work. Future job opportunities
are thus another dimension of incentives that can be leveraged to motivate high-quality data contributions. In this paper, we study dynamic data acquisition from strategic agents for regression problems
and explore the use of future job opportunities to incentivize effort exertion.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
In our setting, a learner has access to a pool of workers and in each round decides on which workers
to ask for data. We propose a Multi-armed Bandit (MAB) framework, called Strategic Regression-Upper Confidence Bound (SR-UCB), that combines a UCB-style index rule with a simple per-round
payment rule to align the incentives of data acquisition with the learning objective. Intuitively, each
worker is an arm and has an index associated with him that measures the quality of his past contributions. The indexes are used by the learner to select workers in the next round. While the MAB
framework is natural for modeling the selection problem with data contributors of potentially varying
qualities, our setting has two challenges that are distinct from classical bandit settings. First, after
a worker contributes his data, there is no ground-truth observation to evaluate how well the worker
performs (or the reward, as commonly referred to in a MAB setting). Second, a worker's performance
is a result of his strategic decision (e.g., how much effort he exerts), instead of being purely exogenously
determined. Our SR-UCB framework overcomes the first challenge by evaluating the quality
of an agent's contributed data against an estimator trained on data provided by all other agents to
obtain an unbiased estimate of the quality, an idea inspired by the peer prediction literature [11, 16].
To address the second challenge, our SR-UCB framework enables a game-theoretic equilibrium
with workers exerting target effort levels chosen by the learner. More specifically, in addition to
proposing the SR-UCB framework, our contributions include:
• We show that SR-UCB helps simplify the design of payment, and successfully incentivizes effort
exertion for acquiring data for linear regression. Every worker exerting a targeted effort level
(for labeling and reporting the data) is an $O(\sqrt{\log T/T})$-Bayesian Nash Equilibrium (BNE). We
can also extend the above results to a certain family of non-linear regression problems.
• SR-UCB indexes can be maintained in an online fashion, hence are computationally light.
• We extend SR-UCB to Private SR-UCB (PSR-UCB) to further provide privacy guarantees, with
only a small compromise on incentives. PSR-UCB is $(O(\log^{-1} T), O(\log^{-1} T))$-differentially private,
and every worker exerting the targeted effort level is an $O(\log^{6} T/\sqrt{T})$-BNE.
2 Related work
Recent works have formulated various strategic learning settings under different objectives [2, 4, 10,
20]. Among these, payment based solutions are proposed for regression problems when data come
from workers who are either effort sensitive [2] or privacy sensitive [4]. These solutions induce
game-theoretic equilibria where high-quality data are contributed. The basic idea of designing the
payment rules is inspired by the much more mature literature of proper scoring rules [8] and peer prediction [16]. Both [2] and [4] consider a static data acquisition procedure, while our work focuses on
a dynamic data acquisition process. Leveraging the long-term incentive of future job opportunities,
our work has a much simpler payment rule than those of [2] and [4] and relaxes some of the restrictions
on the learning objectives (e.g., well-behaved in [2]), at the cost of a weaker equilibrium
concept (approximate BNE in this work vs. dominant strategy in [2]).
Multi-armed Bandit (MAB) is a sequential decision making and learning framework which has
been extensively studied. It is nearly impossible to survey the entire bandit literature. The seminal
work by Lai et al [13] derived lower and upper bounds on asymptotic regret on bandit selection.
More recently, finite-time algorithms have been developed for i.i.d. bandits [1] . Different from
the classical settings, this work needs to deal with challenges such as no ground-truth observations
for bandits and bandits? rewards being strategically determined. A few recent works [7, 15] also
considered bandit settings with strategic arms. Our work differs from these in that we consider
a regression learning setting without ground-truth observations, as well as we consider long-term
workers whose decisions on reporting data can change over time.
Our work and motivations have some resemblance to online contract design problems for a principalagent model [9]. But unlike the online contract design problems, our learner cannot verify the quality
of finished work after each task assignment. In addition, instead of focusing on learning the optimal
contract, we use bandits mainly to maintain a long-term incentive for inducing high-quality data.
3 Formulation
The learner observes a set of feature data $X$ for training. To make our analysis tractable, we assume
each $x \in X$ is sampled uniformly from a unit ball of dimension $d$: $x \in \mathbb{R}^d$ s.t. $\|x\|_2 \le 1$. Each
$x$ associates with a ground-truth response (or label) $y(x)$, which cannot be observed directly by the
learner. Suppose $x$ and $y(x)$ are related through a function $f : \mathbb{R}^d \to \mathbb{R}$ such that $y(x) = f(x) + z$, where
$z$ is a zero-mean noise with variance $\sigma_z$, and is independent of $x$. For example, for linear regression
$f(x) = \theta^T x$ for some $\theta \in \mathbb{R}^d$. The learner would like to learn a good estimate $\hat f$ of $f$. For the purpose
of training, the learner needs to figure out $y(x)$ for different $x \in X$. To obtain an estimate $\hat y(x)$ of
$y(x)$, the learner assigns each $x$ to a selected worker to obtain a label.
Agent model: Suppose we have a set of workers $U = \{1, 2, \dots, N\}$ with $N \ge 2$. After receiving
the labeling task, each worker will decide on the effort level $e$ he wants to exert to generate an
outcome: higher effort leads to a better outcome, but is also associated with a higher cost. We
assume $e$ has bounded support $[0, \bar e]$ for every worker $i \in U$. When deciding on an effort level, a
worker wants to maximize his expected payment minus the cost for effort exertion. The resulting label
$\hat y(x)$ will be given back to the learner. Denote by $\hat y_i(x, e)$ the label returned by worker $i$ for data
instance $x$ (if assigned) with chosen effort level $e$. We consider the following effort-sensitive agent
model: $\hat y_i(x, e) = f(x) + z + z_i(e)$, where $z_i(e)$ is a zero-mean noise with variance $\sigma_i(e)$. $\sigma_i(e)$ can
be different for different workers, and $\sigma_i(e)$ decreases in $e$ for all $i$. The $z$ and $z_i$'s have bounded support
such that $|z|, |z_i| \le Z$, $\forall i$. Without loss of generality, we assume that the cost for exerting effort $e$ is
simply $e$ for every worker.
Learner's objective: Suppose the learner wants to learn $f$ with the set of samples $X$. Then the
learner finds effort levels $e^*$ for data points in $X$ such that
$$e^* \in \operatorname{argmin}_{\{e(x)\}_{x \in X}} \; \mathrm{ERROR}\big(\hat f(\{x, \hat y(x, e(x))\}_{x \in X})\big) + \eta \cdot \mathrm{PAYMENT}(\{e(x)\}_{x \in X}),$$
where $e(x)$ is the effort level for sample $x$, and $\{\hat y(x, e(x))\}_{x \in X}$ is the set of labeled responses for
the training data $X$. $\hat f(\cdot)$ is the regression model trained over this data. The learner assigns the data and
pays appropriately to induce the corresponding effort levels $e^*$. This formulation resembles the one
presented in [2]. The ERROR term captures the expected error of the trained model using the collected
data (e.g., measured in squared loss), while the PAYMENT term captures the total expected budget that
the learner spends to receive the labels. This payment quantity depends on the mechanism that the
learner chooses to use and is the expected payment of the mechanism to induce the selected effort level
for each data point $\{e(x)\}_{x \in X}$. $\eta > 0$ is a weighting factor, which is a constant. It is clear that the
objective function depends on the $\sigma_i$'s. We assume for now that the learner knows the $\sigma_i(\cdot)$'s,¹ and the
optimal $e^*$ can be computed.
4 Strategic Regression-UCB (SR-UCB): A general template
We propose SR-UCB for solving the dynamic data acquisition problem. SR-UCB enjoys a bandit
setting, where we borrow the idea from the classical UCB algorithm [1], which maintains an index
for each arm (worker in our setting), balancing exploration and exploitation. While a bandit framework is not necessarily the best solution for our dynamic data acquisition problem, it is a promising
option for the following reasons. First, as utility maximizers, workers would like to be assigned
tasks as long as the marginal gain for taking a task is positive. A bandit algorithm can help execute
the assignment process. Second, carefully designed indexes can potentially reflect the amount of
effort exerted by the agents. Third, because the arm selection (of bandit algorithms) is based on the
indexes of workers, it introduces competition among workers for improving their indexes.
SR-UCB contains the following two critical components:
Per-round payment: For each worker $i$, once selected to label a sample $x$, we will assign a base
payment $p_i = e_i + \xi$,² after reporting the labeling outcome, where $e_i$ is the desired effort level that
we would like to induce from worker $i$ (for simplicity we have assumed the cost for exerting effort $e_i$
equals the effort level), and $\xi > 0$ is a small quantity. The design of this base payment is to ensure that
once selected, a worker's base cost will be covered. Note the above payment depends on neither the
assigned data instance $x$ nor the reported outcome $\hat y$. Therefore such a payment procedure can be
pre-defined after the learner sets a target effort level.
¹ This assumption can be relaxed. See our supplementary materials for the case with homogeneous $\sigma$.
² We assume workers have knowledge of how the mechanism sets up this $\xi$.
Assignment: The learner assigns multiple tasks $\{x_i(t)\}_{i \in d(t)}$ at time $t$, with $d(t)$ denoting the set of
workers selected at $t$. Denote by $e_i(t)$ the effort level worker $i$ exerted for $x_i(t)$, if $i \in d(t)$. Note all
$\{x_i(t)\}_{i \in d(t)}$ are different tasks, and each of them is assigned to exactly one worker. The selection of
workers will depend on the notion of indexes. Details are given in Algorithm 1.
Algorithm 1 SR-UCB: Worker index & selection
Step 1. For each worker $i$, first train an estimator $\hat f_{-i,t}$ using data $\{x_j(n) : 1 \le n \le t-1,\ j \in d(n),\ j \ne i\}$,
that is, using the data collected from workers $j \ne i$ up to time $t - 1$. When $t = 1$, we will
initialize by sampling each worker at least once such that $\hat f_{-i,t}$ can be computed.
Step 2. Then compute the following index for worker $i$ at time $t$:
$$I_i(t) = \frac{1}{n_i(t)} \sum_{n=1}^{t} \mathbb{1}(i \in d(n))\Big[a - b\big(\hat f_{-i,t}(x_i(n)) - \hat y_i(n, e_i(n))\big)^2\Big] + c\,\sqrt{\frac{\log t}{n_i(t)}},$$
where $n_i(t)$ is the number of times worker $i$ has been selected up to time $t$. $a, b$ are two positive
constants for "scoring", and $c$ is a normalization constant. $\hat y_i(n, e_i(n))$ is the corresponding label
for task $x_i(n)$ with effort level $e_i(n)$, if $i \in d(n)$.
Step 3. Based on the above index, we select $d(t)$ at time $t$ such that $d(t) := \{j : I_j(t) \ge \max_i I_i(t) - \alpha(t)\}$,
where $\alpha(t)$ is a perturbation term decreasing in $t$.
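A toy simulation of Algorithm 1 for the linear case can illustrate the selection dynamics. Effort levels are fixed exogenously here rather than chosen strategically, and the noise curve σ_i(e), the constants a, b, c, and the tolerance schedule are illustrative choices, not the paper's calibrated values; with these settings the high-effort workers should end up selected far more often.

    import numpy as np

    rng = np.random.default_rng(2)
    d, N, T = 3, 4, 400
    theta = rng.standard_normal(d)
    effort = np.array([1.0, 1.0, 0.2, 0.2])   # workers 3 and 4 shirk in this toy run
    sigma = lambda e: 1.0 / (1.0 + e)          # illustrative noise-effort curve
    a, b, c = 1.0, 0.3, 0.3

    def draw(i):
        x = rng.standard_normal(d); x /= max(1.0, np.linalg.norm(x))
        return x, theta @ x + sigma(effort[i]) * rng.standard_normal()

    X = [[] for _ in range(N)]; Y = [[] for _ in range(N)]
    for i in range(N):                         # initialize: sample everyone once
        x, y = draw(i); X[i].append(x); Y[i].append(y)

    for t in range(2, T + 1):
        I = np.empty(N)
        for i in range(N):
            # Step 1: leave-one-out estimator from the other workers' data.
            Xo = np.vstack([x for j in range(N) if j != i for x in X[j]])
            Yo = np.concatenate([np.asarray(Y[j]) for j in range(N) if j != i])
            th = np.linalg.lstsq(Xo, Yo, rcond=None)[0]
            # Step 2: Brier-score index plus exploration bonus.
            score = np.mean([a - b * (th @ x - y) ** 2 for x, y in zip(X[i], Y[i])])
            I[i] = score + c * np.sqrt(np.log(t) / len(X[i]))
        # Step 3: select everyone within a shrinking tolerance of the best index.
        for i in np.flatnonzero(I >= I.max() - 0.2 * np.sqrt(np.log(t) / t)):
            x, y = draw(i); X[i].append(x); Y[i].append(y)

    print("selection counts per worker:", [len(xs) for xs in X])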
Some remarks on SR-UCB: (1) Different from the classical bandit setting, when calculating the
indexes, there is no ground-truth observation for evaluating the performance of each worker. Therefore
we adopt the notion of a scoring rule [8]. Particularly, the one we used above is the well-known
Brier scoring rule: $B(p, q) = a - b(p - q)^2$. (2) The scoring-rule based index looks similar to the
payment rules studied in [2, 4]. But as we will show later, under our framework the selection of $a, b$
is much less sensitive to different problem settings, as with an index policy, only the relative values
matter (ranking). This is another benefit of separating payment from selection. (3) Instead of only
selecting the best worker with the highest index, we select workers whose index is within a certain
range of the maximum one (a confidence region). This is because workers may have competing
expertise levels, and hence selecting only one of them would de-incentivize workers' effort exertion.
4.1 Solution concept
Denote by $e(n) := \{e_1(n), \dots, e_N(n)\}$, and $e_{-i}(n) := \{e_j(n)\}_{j \ne i}$. We define approximate Bayesian
Nash Equilibrium as our solution concept:
Definition 1. Suppose SR-UCB runs for $T$ stages. $\{e_i(t)\}_{i=1,t=1}^{N,T}$ is an $\varepsilon$-BNE if $\forall i$, $\forall \{e'_i(t)\}_{t=1}^{T}$:
$$\mathbb{E}\Big[\frac{1}{T}\sum_{t=1}^{T} (p_i - e_i(t))\,\mathbb{1}(i \in d(t)) \,\Big|\, \{e(n)\}_{n \le t}\Big] \ge \mathbb{E}\Big[\frac{1}{T}\sum_{t=1}^{T} (p_i - e'_i(t))\,\mathbb{1}(i \in d(t)) \,\Big|\, \{e'_i(n), e_{-i}(n)\}_{n \le t}\Big] - \varepsilon.$$
This is to say that by deviating, each worker will gain no more than $\varepsilon$ net payment per round. We
will establish our main results in terms of $\varepsilon$-BNE. The reason we adopt such a notion is that in a
sequential setting it is generally hard to achieve a strict BNE or another stronger notion, as any one-step
deviation may not affect a long-term evaluation by much.³ Approximate BNE is likely the best
solution concept we can hope for.
5 Linear regression
5.1 Settings and a warm-up scenario
In this section we present our results for a simple linear regression task where the feature $x$ and
observation $y$ are linearly related via an unknown $\theta$: $y(x) = \theta^T x + z$, $\forall x \in X$. Let's start with assuming
all workers are statistically identical, such that $\sigma_1 = \sigma_2 = \dots = \sigma_N$. This is an easier case that serves
as a warm-up. It is known that, given training data, we can find an estimate $\hat\theta$ that minimizes a
non-regularized empirical risk function: $\hat\theta = \operatorname{argmin}_{\theta \in \mathbb{R}^d} \sum_{x \in X} (y(x) - \theta^T x)^2$ (linear least squares). To
put this model into SR-UCB, denote by $\hat\theta_{-i}(t)$ the linear least squares estimator trained using data
from workers $j \ne i$ up to time $t - 1$, and let $I_i(t) := S_i(t) + c\sqrt{\log t / n_i(t)}$, with
$$S_i(t) := \frac{1}{n_i(t)} \sum_{n=1}^{t-1} \mathbb{1}(i \in d(n))\Big[a - b\big(\hat\theta_{-i}(t)^T x_i(n) - \hat y_i(n, e_i(n))\big)^2\Big]. \qquad (5.1)$$
³ Certainly, we can run mechanisms that induce a BNE or dominant-strategy equilibrium for the one-shot setting,
e.g. [2], at every time step. But such a solution does not incorporate long-term incentives.
Suppose $\|\theta\|_2 \le M$. Given $\|x\|_2 \le 1$ and $|z|, |z_i| \le Z$, we can then prove that $\forall t, n, i$,
$(\hat\theta_{-i}(t)^T x_i(n) - \hat y_i(n, e_i(n)))^2 \le 8M^2 + 2Z^2$. Choose $a, b$ such that $a - (8M^2 + 2Z^2)b \ge 0$; then we have
$0 \le S_i(t) \le a$, $\forall i, t$. For the perturbation term, we set $\alpha(t) := O(\sqrt{\log t/t})$. The intuition is that with $t$ samples,
the uncertainties in the indexes, coming from both the score calculation and the bias term, can be
upper bounded at the order of $O(\sqrt{\log t/t})$. Thus, to not miss a competitive worker, we set the
tolerance to be of the same order.
We now develop the formal equilibrium result of SR-UCB for linear least squares. Our analysis
requires the following assumption on the smoothness of $\sigma$.
Assumption 1. We assume $\sigma(e)$ is convex on $e \in [0, \bar e]$, with gradient $\sigma'(e)$ being both upper
bounded and lower bounded away from 0, i.e., $\bar L \ge |\sigma'(e)| \ge L > 0$, $\forall e$.
The learner wants to learn $f$ with a total of $NT$ (= $|X|$, or $\lceil NT \rceil = |X|$) samples. Since workers are
statistically equivalent, ideally the learner would like to run SR-UCB for $T$ steps and collect a label
for a unique sample from each worker at each step. Hence, the learner would like to elicit a single
target effort level $e^*$ from all workers and for all samples:
$$e^* \in \operatorname{argmin}_{e} \; \mathbb{E}_{x, y, \hat y}\Big[\big(\hat\theta_T(\{x_i(n), \hat y_i(n, e)\}_{i=1,n=1}^{N,T})^T x - y\big)^2\Big] + \eta \cdot (e + \xi) NT. \qquad (5.2)$$
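A toy numerical illustration of (5.2): estimate the error term by Monte Carlo for each candidate effort level on a grid and add the payment term. The noise curve σ(e), the weight η, and the slack ξ below are illustrative stand-ins, not values from the paper.

    import numpy as np

    rng = np.random.default_rng(3)
    d, N, T, eta, xi = 3, 4, 50, 1e-3, 0.01
    theta = rng.standard_normal(d)
    sigma = lambda e: 1.0 / (1.0 + 2.0 * e)     # illustrative noise-effort curve

    def expected_loss(e, reps=200):
        losses = []
        for _ in range(reps):
            X = rng.standard_normal((N * T, d))
            y = X @ theta + sigma(e) * rng.standard_normal(N * T)
            th = np.linalg.lstsq(X, y, rcond=None)[0]
            x = rng.standard_normal(d)          # fresh test point
            losses.append((th @ x - theta @ x) ** 2 + sigma(e) ** 2)
        return float(np.mean(losses))           # estimation error + irreducible noise

    grid = np.linspace(0.0, 2.0, 9)
    obj = [expected_loss(e) + eta * (e + xi) * N * T for e in grid]
    print("target effort e* ~", grid[int(np.argmin(obj))])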
Due to the uncertainty in worker selection, it is highly likely that after step T , there will be tasks
left unlabelled. We can let the mechanism go for extra steps to complete labelling of these tasks.
But due to the bounded number of missed selections, as we will show later, stopping at step $T$ won't
affect the accuracy of the trained model.
Theorem 1. Under SR-UCB for linear least squares, set the fixed payment $p_i = e^* + \xi$ for all $i$, where
$\xi = \Theta(\sqrt{\log T/T})$, choose $c$ to be a large enough constant, $c \ge \mathrm{Const}(M, Z, N, b)$, and let $\alpha(t) :=
O(\sqrt{\log t/t})$. Workers have full knowledge of the mechanism and the values of the parameters.
Then at an $O(\sqrt{\log T/T})$-BNE, workers, whenever selected, exert effort $e_i(t) \equiv e^*$ for all $i$ and $t$.
The net payment (payment minus the cost of effort) per task can be made arbitrarily small by setting
$\xi$ exactly on the order of $O(\sqrt{\log T/T})$, and $p_i - e^* = \xi = O(\sqrt{\log T/T}) \to 0$ as $T \to \infty$.
Our solution heavily relies on forming a race among workers. By establishing the convergence of
the bandit indexes to a function of effort (via $\sigma(\cdot)$), we show that when other workers $j \ne i$ follow
the equilibrium strategy, worker $i$ will be selected w.h.p. at each round if he also puts in the same
amount of effort. On the other hand, if worker $i$ shirks from doing so by as much as $O(\sqrt{\log T/T})$,
his number of selections will go down in order. This establishes the $\varepsilon$-BNE. As long as there exists
one competitive worker, all others will be incentivized to exert effort. Though, as will be shown
in the next section, all workers shirking from exerting effort is also an $O(\sqrt{\log T/T})$-BNE. This
equilibrium can be removed by adding some uncertainty on top of the bandit selection procedure.
When there are $\ge 2$ workers being selected in SR-UCB, each of them will be assigned a task with
a certain probability $0 < p_s < 1$, while when there is a single selected worker, the worker is
assigned a task w.p. 1. Set $p_s := 1 - O(\sqrt{\log T/T})/\xi$. So with probability $1 - p_s = O(\sqrt{\log T/T})/\xi$,
even the "winning" workers will miss the selection. With this change, exerting $e^*$ still forms an
$O(\sqrt{\log T/T})$-BNE, while every worker exerting any effort level that is $\Delta e > O(\xi)$ lower than the
target effort level is not an $\varepsilon$-BNE with $\varepsilon \le O(\sqrt{\log T/T})$.
5.2 Linear regression with different $\sigma$
Now we consider the more realistic case that different workers have different noise-effort functions
$\sigma$. W.l.o.g., we assume $\sigma_1(e) < \sigma_2(e) < \dots < \sigma_N(e)$, $\forall e$.⁴ In such a setting, ideally we would
always like to collect data from worker 1, since he has the best expertise level (lowest variance in
labeling noise). Suppose we are targeting an effort level $e_1^*$ from data source 1 (the best data source).
We first argue that we also need to incentivize worker 2 to exert a competitive effort level $e_2^*$ such that
$\sigma_1(e_1^*) = \sigma_2(e_2^*)$, and we assume such an $e_2^*$ exists.⁵ This also naturally implies that $e_2^* > e_1^*$, as
worker 1 contributes data with less variance in noise at the same effort level. The reason is similar
to the homogeneous setting: over time workers form a competition on $\sigma_i(e_i)$. Having a competitive
peer will motivate workers to exert as much effort as they can (up to the payment). Therefore the goal
for such a learner (with $2T$ samples to assign) is to find an effort level $e^*$ such that⁶
$$e^* \in \operatorname{argmin}_{e_2 : \sigma_1(e_1) = \sigma_2(e_2)} \; \mathbb{E}_{x, y, \hat y}\Big[\big(\hat\theta_T(\{x_i(n), \hat y_i(n, e_i)\}_{i=1,n=1}^{2,T})^T x - y\big)^2\Big] + \eta \cdot (e_2 + \xi) \cdot 2T.$$
Set the one-step payment to be $p_i = e^* + \xi$, $\forall i$. Let $e_1^*$ be the solution to $\sigma_1(e_1^*) = \sigma_2(e^*)$ and let
$e_i^* = e^*$ for $i \ge 2$. Note for $i > 2$ we have $\sigma_i(e_i^*) - \sigma_1(e_1^*) > 0$. While we have argued about the
necessity of choosing the top two most competitive workers, we have not mentioned the optimality
of doing so. In fact, selecting the top two is the best we can do. Suppose on the contrary that the
optimal solution is to select the top $k > 2$ workers, at effort level $e_k$. According to our solution, we
target the effort level that leads to noise variance $\sigma_k(e_k)$ (so the least competitive worker will
be incentivized). Then we can simply target the same effort level $e_k$, but migrate the task load to
only the top two workers; this keeps the payment the same, but the variance of the noise now becomes
$\sigma_2(e_k) < \sigma_k(e_k)$, which leads to better performance. Denote $\Delta_1 := \sigma_3(e^*) - \sigma_1(e_1^*) > 0$ and assume
Assumption 1 applies to all $\sigma_i$'s. We prove:
Theorem 2. Under SR-UCB for linear least squares, set $c \ge \mathrm{Const}(M, Z, b, \Delta_1)$, $\xi = \Theta(\sqrt{\log T/T})$
with $\xi \le \frac{\Delta_1}{2\bar L}$, and $\alpha(t) := O(\sqrt{\log t/t})$. Then each worker $i$ exerting effort $e_i^*$ once selected forms an
$O(\sqrt{\log T/T})$-BNE.
Performance with acquired data: If workers follow the $\varepsilon$-BNE, the contributed data from the
top two workers (who have been selected the most number of times) will have the same noise variance
$\sigma_1(e_1^*)$. Then, following results in [4], w.h.p. the performance of the trained model is bounded by
$O\big(\sigma_1(e_1^*)/(\sum_{i=1,2} n_i(T))^2\big)$. Ideally we want to have $\sum_{i=1,2} n_i(T) = 2T$, such that an upper bound of
$O(\sigma_1(e_1^*)/(2T)^2)$ can be achieved. Compared to the bound $O(\sigma_1(e_1^*)/(2T)^2)$, SR-UCB's expected
performance loss (due to missed sampling & wrong selection, which is bounded at the order of
$O(\log T)$) is bounded by $\mathbb{E}\big[\sigma_1(e_1^*)/(\sum_{i=1,2} n_i(T))^2 - \sigma_1(e_1^*)/(2T)^2\big] \le O(\sigma_1(e_1^*)\log T/T^3)$ w.h.p.
Regularized linear regression: The ridge estimator has been widely adopted for solving linear regression.
The objective is to find a linear model $\hat\theta$ that minimizes the following regularized empirical
risk: $\hat\theta = \operatorname{argmin}_{\theta \in \mathbb{R}^d} \sum_{x \in X} (y(x) - \theta^T x)^2 + \lambda\|\theta\|_2^2$, with $\lambda > 0$ being the regularization parameter.
We claim that by simply changing the $\hat f_{-i,t}(\cdot)$ in SR-UCB to the output of the above ridge regression,
the $O(\sqrt{\log T/T})$-BNE for inducing an effort level $e^*$ will hold. Different from the non-regularized
case, the introduction of the regularization term will add bias to $\hat\theta_{-i}(t)$, which gives a biased
evaluation of the indexes. However, we prove the convergence of $\hat\theta_{-i}(t)$ (so again the indexes will converge
properly) in the following lemma, which enables an easy adaptation of our previous results for the
non-regularized case to ridge regression:
Lemma 1. With $n$ i.i.d. samples, w.p. $\ge 1 - e^{-Kn}$ ($K > 0$ is a constant), $\|\hat\theta_{-i}(t) - \theta\|_2 \le O\big(\frac{1}{n^{1/2}}\big)$.
Non-linear regression: The basic idea for extending the results to non-linear regression is inspired
by the consistency results on M-estimators [14], when the error of the training data has zero mean.
Similar to the reasoning for Lemma 1, if $(\hat f_{-i,t}(x) - f(x))^2 \to 0$, we can hope for an easy adaptation
of our previous results. Suppose the non-linear regression model can be characterized by a parameter
family $\Theta$, where $f$ is characterized by parameter $\theta$, and $\hat f_{-i,t}$ by $\hat\theta_i(t)$. Due to the consistency
of M-estimators we will have $\|\hat\theta_i(t) - \theta\|_2 \to 0$. More specifically, according to the results from
[18], for the non-linear regression model we can establish an $O(1/\sqrt{n})$ convergence rate with $n$
training samples. When $f$ is Lipschitz in parameter space, i.e., there exists a constant $L_N > 0$
such that $|\hat f_{-i,t}(x) - f(x)| \le L_N \|\hat\theta_i(t) - \theta\|_2$, by the dominated convergence theorem we also have
$(\hat f_{-i,t}(x) - f(x))^2 \to 0$, and $(\hat f_{-i,t}(x) - f(x))^2 \le O(1/t)$. The rest of the proof can then follow.
⁴ Combining with the results for homogeneous workers, we can again easily extend our results to the case
where there is a mixture of homogeneous and heterogeneous workers.
⁵ Such an $e_2^*$ exists when the supports of $\sigma_1(\cdot)$ and $\sigma_2(\cdot)$ overlap over a large range.
⁶ Since we only target the top two workers, we can limit the number of acquisitions at each stage to no
more than two, so the number of queries does not go beyond $2T$.
Example 1. The logistic function $f(x) = \frac{1}{1 + e^{-\theta^T x}}$ satisfies the Lipschitz condition with $L_N = 1/4$.
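A short verification of this constant, under the assumption $\|x\|_2 \le 1$ used throughout:

    \[
    \big\| \nabla_\theta\, s(\theta^T x) \big\|_2
      = \big\| x\, s(\theta^T x)\bigl(1 - s(\theta^T x)\bigr) \big\|_2
      \le \|x\|_2 \cdot \max_u s(u)\bigl(1 - s(u)\bigr) = 1 \cdot \tfrac14,
    \]
    where $s(u) = 1/(1 + e^{-u})$; by the mean value theorem,
    $|f_{\theta_1}(x) - f_{\theta_2}(x)| \le \tfrac14 \|\theta_1 - \theta_2\|_2$, i.e., $L_N = 1/4$.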
6 Computational issues
In order to update the indexes and select workers adaptively, we face a few computational challenges.
First, in order to update the index for each worker at any time $t$, a new estimator $\hat\theta_{-i}(t)$ (using data
from all other workers $j \ne i$ up to time $t - 1$) needs to be re-computed. Second, we need to re-apply
$\hat\theta_{-i}(t)$ to every collected sample from worker $i$, $\{(x_i(n), \hat y_i(n, e_i(n))) : i \in d(n),\ n = 1, 2, \dots, t-1\}$, from
previous rounds. We propose online variants of SR-UCB to address these challenges.
Online update of $\hat\theta_{-i}(\cdot)$: Inspired by the online learning literature, instead of re-computing $\hat\theta_{-i}(t)$
at each step, which involves re-calculating the inverse of a covariance matrix (e.g., $(\lambda I + X^T X)^{-1}$
for ridge regression) whenever a new sample point arrives, we can update $\hat\theta_{-i}(t)$ in an
online fashion, which is computationally much more efficient. We demonstrate our results with
ridge linear regression. Start with an initial model $\hat\theta^{\mathrm{online}}_{-i}(1)$. Denote by $(\tilde x_{-i}(t), \tilde y_{-i}(t))$ any newly
arrived sample at time $t$ from a worker $j \ne i$. Update $\hat\theta^{\mathrm{online}}_{-i}(t+1)$ (for computing $I_i(t+1)$) as [17]:
$$\hat\theta^{\mathrm{online}}_{-i}(t+1) := \hat\theta^{\mathrm{online}}_{-i}(t) - \gamma_t \cdot \nabla_{\hat\theta^{\mathrm{online}}_{-i}(t)}\Big[\big(\theta^T \tilde x_{-i}(t) - \tilde y_{-i}(t)\big)^2 + \lambda\|\theta\|_2^2\Big],$$
where $\gamma_t$ is the step size.
Notice there could be multiple such data points arriving at each time, in which case we will update
sequentially in an arbitrary order. It is also possible that no sample point arrives from
workers other than $i$ at a time $t$, in which case we simply do not perform an update. Name this online-updating
SR-UCB as OSR1-UCB. With online updating, the accuracy of the trained model $\hat\theta^{\mathrm{online}}_{-i}(t+1)$
converges more slowly, and so does the accuracy of the index for characterizing a worker's performance.
Nevertheless, we prove that exerting the targeted effort level $e^*$ is an $O(\sqrt{\log T/T})$-BNE under OSR1-UCB for
ridge regression, using the convergence results for $\hat\theta^{\mathrm{online}}_{-i}(t)$ proved in [17].
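A hedged sketch of this online update for ridge regression; the step-size schedule γ_t = 1/(λ(t + t₀)) is a common choice for strongly convex objectives in the spirit of [17], not necessarily the one the paper uses.

    import numpy as np

    rng = np.random.default_rng(4)
    d, lam, T = 3, 0.1, 5000
    theta = rng.standard_normal(d)
    th = np.zeros(d)                       # the online estimate

    for t in range(1, T + 1):
        x = rng.standard_normal(d); x /= max(1.0, np.linalg.norm(x))
        y = theta @ x + 0.3 * rng.standard_normal()
        gamma = 1.0 / (lam * (t + 100))    # step size; t0 = 100 avoids early blow-up
        grad = 2.0 * (th @ x - y) * x + 2.0 * lam * th
        th -= gamma * grad                 # one O(d) update per arriving sample

    # th approaches the ridge solution, which is slightly biased toward zero.
    print("||th - theta|| =", float(np.linalg.norm(th - theta)))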
Online score update: Online updating can also help compute $S_i(t)$ (in $I_i(t)$) efficiently. Instead of
repeatedly re-calculating the score for each data point (in $S_i(t)$), we only score the newly assigned
samples which have not been evaluated yet, by replacing $\hat\theta^{\mathrm{online}}_{-i}(t)$ with $\hat\theta^{\mathrm{online}}_{-i}(n)$ in $S_i(t)$:
$$S^{\mathrm{online}}_i(t) := \frac{1}{n_i(t)} \sum_{n=1}^{t} \mathbb{1}(i \in d(n))\Big[a - b\big((\hat\theta^{\mathrm{online}}_{-i}(n))^T x_i(n) - \hat y_i(n, e_i(n))\big)^2\Big]. \qquad (6.1)$$
With this less aggressive update, the accuracy of the index term again converges more slowly than before,
due to the fact that older data are scored using an older (and less accurate) version of $\hat\theta^{\mathrm{online}}_{-i}$ without
being further updated. We propose OSR2-UCB, where we change the SR-UCB index to
$S^{\mathrm{online}}_i(t) + c\sqrt{(\log t)^2/n_i(t)}$, to accommodate the slower convergence. We establish an $O(\log T/\sqrt{T})$-BNE
for workers exerting the target effort; the change is due to the change of the bias term.
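The running-average form of Eq. (6.1) means each new sample is scored exactly once, with whatever model is available at that moment, and folded into the index state in O(1); here is a sketch for a single worker, where a, b and the supplied model are placeholders.

    import numpy as np

    a, b = 1.0, 0.3
    S, n = 0.0, 0                           # running score state for one worker

    def record_sample(th_online, x, y_reported):
        # Older samples keep the scores they got from older models; only the
        # new sample is scored, then the mean is updated incrementally.
        global S, n
        n += 1
        S += (a - b * (th_online @ x - y_reported) ** 2 - S) / n
        return S

    rng = np.random.default_rng(5)
    th = np.array([0.5, -0.2])
    for _ in range(5):
        x = rng.standard_normal(2)
        print(round(record_sample(th, x, th @ x + 0.1 * rng.standard_normal()), 3))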
7 Privacy-preserving SR-UCB
In a repeated data acquisition setting, workers' privacy in their data may leak repeatedly. In this section
we study an extension of SR-UCB to preserve the privacy of each individual worker's contributed data.
Denote the collected training data as $\mathcal D := \{\hat y_i(t, e_i(t))\}_{i \in d(t), t}$. We quantify privacy using differential
privacy [5], and we adopt $(\epsilon, \delta)$-differential privacy (DP) [6], which for our setting is defined below:
Definition 2. A mechanism $M : (X \times \mathbb{R})^{|\mathcal D|} \to \mathcal O$ is $(\epsilon, \delta)$-differentially private if for any $i \in d(t)$, $t$,
any two distinct $\hat y_i(t, e_i(t))$, $\hat y'_i(t, e'_i(t))$, and for every subset of possible outputs $\mathcal S \subseteq \mathcal O$,
$\Pr[M(\mathcal D) \in \mathcal S] \le \exp(\epsilon)\,\Pr[M(\mathcal D \backslash \{\hat y_i(t, e_i(t))\},\, \hat y'_i(t, e'_i(t))) \in \mathcal S] + \delta$.
An outcome $o \in \mathcal O$ of a mechanism contains two parts, both of which can contribute to privacy
leakage: (1) The learned regression model $\hat\theta(T)$, which is trained using all data collected after $T$
rounds. Suppose that after learning the regression model $\hat\theta(T)$, this information will be released for
public usage or monitoring. This information contains each individual worker's private information.
Note this is a one-shot leak of privacy (published at the end of the training, step $T$). (2) The
indexes can reveal private information. Each worker $i$'s data will be utilized towards calculating
other workers' indexes $I_j(t)$, $j \ne i$, as well as his own $I_i(t)$, which will be published.⁷ Note this type
of leakage occurs at each step. The lemma below allows us to focus on the privacy losses in $S_j(t)$,
instead of $I_j(t)$, as both $I_j(t)$ and $n_i(t)$ are functions of $\{S_j(n)\}_{n \le t}$.
⁷ It is debatable whether the indexes should be published or not. But revealing decisions on worker selection
will also reveal information on the indexes. We consider the more direct scenario: indexes are published.
Lemma 2. At any time $t$, $\forall i$, $n_i(t)$ can be written as a function of $\{S_j(n), n < t\}_j$.
Preserving privacy in $\hat\theta(T)$: To protect privacy in $\hat\theta(T)$, following the standard method [6], we add a
Laplacian noise vector $v_\theta$ to it: $\hat\theta^p(T) = \hat\theta(T) + v_\theta$, where $\Pr(v_\theta) \propto \exp(-\epsilon_\theta \|v_\theta\|_2)$. $\epsilon_\theta > 0$ is a
parameter controlling the noise level.
parameter controlling the noise level.
?
Lemma 3.
Set ?? = 2 T , the output ?? p (T ) of SR-UCB for linear regression preserves
?
? )||2 = ||v? ||2 ? log T / T .
(O T ?1/2 , exp(? O T ))-DP. Further w.p. ? 1 ? 1/T 2 , ||?? p (T ) ? ?(T
Preserving privacy in $\{I_i(t)\}_{i,t}$: a continual privacy-preserving model. For the indexes $\{I_i(t)\}_i$, it
is also tempting to add $v_i(t)$ to each index, i.e., $I_i(t) := I_i(t) + v_i(t)$, where again $v_i(t)$ is a zero-mean
Laplacian noise. However, releasing $\{I_i(t)\}_i$ at each step will release a noisy version of each
$\hat y_i(n, e_i(n))$, $i \in d(n)$, $\forall n < t$. The composition theorem in differential privacy [12] implies that the
preserved privacy level will grow in time $t$, unless we add significant noise at each stage, which
would completely destroy the informativeness of our index policy. We borrow the partial-sum idea for
continual observations [3]. The idea is that when releasing continual data, instead of inserting noise at
every step, the current to-be-released data is decoupled into a sum of partial sums, and we only
add noise to each partial sum; these noisy versions of the partial sums can be re-used repeatedly.
We consider adding noise to a modified version of the online indexes $\{S^{\mathrm{online}}_i(t)\}_{i,t}$ as defined in
Eqn. (6.1), with $\hat\theta^{\mathrm{online}}_{-i}(t)$ replaced by $\sum_{n=1}^{t}\hat\theta_{-i}(n)/t$, where $\hat\theta_{-i}(n)$ is the regression model we
estimated using all data from workers $j \ne i$ up to time $n$. For each worker $i$, his contributed data
appear in both $\{S^{\mathrm{online}}_i(t)\}_t$ and $\{S^{\mathrm{online}}_j(t)\}_t$, $j \ne i$. For $S^{\mathrm{online}}_j(t)$, $j \ne i$, we want to preserve privacy
in $\sum_{n=1}^{t}\hat\theta_{-j}(n)/t$, which contains information of $\hat y_i(n, e_i(n))$.
We first apply the partial-sums idea to $\sum_{n=1}^{t}\hat\theta_{-j}(n)/t$. Write $t$ as a binary string and find the
rightmost digit that is a 1, then flip that digit to 0; converting back to decimal gives $q(t)$. Take the
sum from $q(t)+1$ to $t$, $\sum_{n=q(t)+1}^{t}\hat\theta_{-j}(n)$, as one partial sum. Repeat the above for $q(t)$, to get $q(q(t))$
and the second partial sum $\sum_{n=q(q(t))+1}^{q(t)}\hat\theta_{-j}(n)$, until we reach $q(\cdot) = 0$. So
$$\frac{1}{t}\sum_{n=1}^{t}\hat\theta_{-j}(n) = \frac{1}{t}\Big(\sum_{n=q(t)+1}^{t}\hat\theta_{-j}(n) + \sum_{n=q(q(t))+1}^{q(t)}\hat\theta_{-j}(n) + \cdots\Big). \qquad (7.1)$$
Add a noise $v_{\hat\theta}$ with $\Pr(v_{\hat\theta}) \propto e^{-\epsilon\|v_{\hat\theta}\|_2}$ to each partial sum. The number of noise terms is bounded
by $\lceil \log t\rceil$ at time $t$, and so is the number of appearances of each private data point in the partial
sums [3]. Denote the noisy version of $\sum_{n=1}^{t}\hat\theta_{-j}(n)/t$ by $\tilde\theta^{\mathrm{online}}_{-j}(t)$. Each $S^{\mathrm{online}}_i(t)$ is then
computed using $\tilde\theta^{\mathrm{online}}_{-i}(\cdot)$.
For $S^{\mathrm{online}}_i(t)$, we also want to preserve privacy in $\hat y_i(n, e_i(n))$. Clearly $S^{\mathrm{online}}_i(t)$ can be written as
a sum of partial sums of terms involving $\hat y_i(n, e_i(n))$: write $S^{\mathrm{online}}_i(t)$ as a summation
$\sum_{n=1}^{n_i(t)} dS(n)/n_i(t)$ (short-handing $dS(n) := a - b\big((\tilde\theta^{\mathrm{online}}_{-i}(t(n)))^T x_i(t(n)) - \hat y_i(t(n), e_i(t(n)))\big)^2$, where $t(n)$ denotes the
time at which worker $i$ is sampled for the $n$-th time). Decouple $S^{\mathrm{online}}_i(t)$ into partial sums using the same
technique. For each partial sum, add a noise $v_S$ with distribution $\Pr(v_S) \propto e^{-\epsilon |v_S|}$.
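The q(t) construction is the classic binary counting mechanism: q(t) clears the lowest set bit of t, so any prefix [1, t] decomposes into at most ⌈log₂ t⌉ dyadic blocks, each noised once and reused forever. A sketch for a scalar stream (the vector-valued θ̂ sums work the same way, coordinate-wise or with the vector noise above):

    import numpy as np

    def q(t):
        return t & (t - 1)          # flip the rightmost 1-bit of t to 0

    def intervals(t):
        out = []
        while t > 0:
            out.append((q(t) + 1, t))
            t = q(t)
        return out                  # at most ceil(log2 t) dyadic blocks

    rng = np.random.default_rng(7)
    stream = rng.standard_normal(100)
    eps, noise = 1.0, {}            # one Laplace draw per partial sum, reused

    def noisy_prefix_sum(t):
        total = 0.0
        for lo, hi in intervals(t):
            if (lo, hi) not in noise:
                noise[(lo, hi)] = rng.laplace(scale=1.0 / eps)
            total += stream[lo - 1:hi].sum() + noise[(lo, hi)]
        return total

    print(intervals(13))                              # [(13, 13), (9, 12), (1, 8)]
    print(noisy_prefix_sum(13), float(stream[:13].sum()))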
We then show that with the above two noise-injection procedures, our index policy SR-UCB does
not lose its value in incentivizing effort. In order to prove similar convergence results, we need to
modify SR-UCB by changing the index to the following format:
$$I_i(t) = \tilde S^{\mathrm{online}}_i(t) + c\,\frac{\log^3 t \cdot \log^3 T}{\sqrt{n_i(t)}}, \qquad \alpha(t) = O\Big(\frac{\log^3 t \cdot \log^3 T}{\sqrt{t}}\Big),$$
where $\tilde S^{\mathrm{online}}_i(t)$ denotes the noisy version of $S^{\mathrm{online}}_i(t)$ with the added noises ($v_S$, $v_{\hat\theta}$, etc.). The change of
the bias term is mainly to incorporate the increased uncertainty level (due to the added privacy-preserving noise).
Denoting this mechanism PSR-UCB, we have:
Theorem 3. Set $\epsilon := 1/\log^3 T$ for the added noises (both $v_S$ and $v_{\hat\theta}$); then PSR-UCB preserves
$(O(\log^{-1} T), O(\log^{-1} T))$-DP for linear regression.
With homogeneous workers, we can similarly prove that exerting effort $\{e_i^*\}_i$ (the optimal effort
levels) is an $O(\log^6 T/\sqrt{T})$-BNE. We can see that, in order to protect privacy in the bandit setting, the
approximation term of the BNE is worse than before.
Acknowledgement: We acknowledge the support of NSF grant CCF-1301976.
References
[1] Peter Auer, Nicolo Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed
bandit problem. Machine Learning, 47(2-3):235–256, 2002.
[2] Yang Cai, Constantinos Daskalakis, and Christos H. Papadimitriou. Optimum statistical estimation
with strategic data sources. arXiv preprint arXiv:1408.2539, 2014.
[3] T-H. Hubert Chan, Elaine Shi, and Dawn Song. Private and continual release of statistics. ACM
Transactions on Information and System Security (TISSEC), 14(3):26, 2011.
[4] Rachel Cummings, Stratis Ioannidis, and Katrina Ligett. Truthful linear regression. In Proceedings
of The 28th Conference on Learning Theory, COLT 2015, pages 448–483, 2015.
[5] Cynthia Dwork. Differential privacy. In Automata, Languages and Programming. 2006.
[6] Cynthia Dwork and Aaron Roth. The algorithmic foundations of differential privacy.
[7] Arpita Ghosh and Patrick Hummel. Learning and incentives in user-generated content: Multi-armed
bandits with endogenous arms. In Proceedings of the 4th Conference on Innovations in
Theoretical Computer Science, pages 233–246. ACM, 2013.
[8] Tilmann Gneiting and Adrian E. Raftery. Strictly proper scoring rules, prediction, and estimation.
Journal of the American Statistical Association, 102(477):359–378, 2007.
[9] Chien-Ju Ho, Aleksandrs Slivkins, and Jennifer Wortman Vaughan. Adaptive contract design
for crowdsourcing markets: Bandit algorithms for repeated principal-agent problems. In Proceedings
of the Fifteenth ACM EC, pages 359–376. ACM, 2014.
[10] Stratis Ioannidis and Patrick Loiseau. Linear regression as a non-cooperative game. In Web
and Internet Economics, pages 277–290. Springer, 2013.
[11] Radu Jurca and Boi Faltings. Collusion-resistant, incentive-compatible feedback payments. In
Proceedings of the 8th ACM Conference on Electronic Commerce, pages 200–209. ACM, 2007.
[12] Peter Kairouz, Sewoong Oh, and Pramod Viswanath. The composition theorem for differential
privacy. arXiv preprint arXiv:1311.0776, 2013.
[13] Tze Leung Lai and Herbert Robbins. Asymptotically efficient adaptive allocation rules. Advances
in Applied Mathematics, 6(1):4–22, 1985.
[14] Guy Lebanon. M-estimators and Z-estimators.
[15] Yishay Mansour, Aleksandrs Slivkins, and Vasilis Syrgkanis. Bayesian incentive-compatible
bandit exploration. In Proceedings of the Sixteenth ACM EC, pages 565–582. ACM, 2015.
[16] Nolan Miller, Paul Resnick, and Richard Zeckhauser. Eliciting informative feedback: The
peer-prediction method. Management Science, 51(9):1359–1373, 2005.
[17] Alexander Rakhlin, Ohad Shamir, and Karthik Sridharan. Making gradient descent optimal for
strongly convex stochastic optimization. arXiv preprint arXiv:1109.5647, 2011.
[18] B.L.S. Prakasa Rao. The rate of convergence of the least squares estimator in a non-linear
regression model with dependent errors. Journal of Multivariate Analysis, 1984.
[19] Victor S. Sheng, Foster Provost, and Panagiotis G. Ipeirotis. Get another label? Improving
data quality and data mining using multiple, noisy labelers. In Proceedings of the 14th ACM
SIGKDD International Conference on Knowledge Discovery and Data Mining, 2008.
[20] Panos Toulis, David C. Parkes, Elery Pfeffer, and James Zou. Incentive-compatible experimental
design. In Proceedings of the 16th ACM EC'15, pages 285–302, 2015.
Spectral Learning of Dynamic Systems from Nonequilibrium Data
Hao Wu and Frank Noé
Department of Mathematics and Computer Science
Freie Universität Berlin
Arnimallee 6, 14195 Berlin
{hao.wu,frank.noe}@fu-berlin.de
Abstract
Observable operator models (OOMs) and related models are one of the most important and powerful tools for modeling and analyzing stochastic systems. They
exactly describe dynamics of finite-rank systems and can be efficiently and consistently estimated through spectral learning under the assumption of identically
distributed data. In this paper, we investigate the properties of spectral learning
without this assumption due to the requirements of analyzing large time-scale
systems, and show that the equilibrium dynamics of a system can be extracted
from nonequilibrium observation data by imposing an equilibrium constraint. In
addition, we propose a binless extension of spectral learning for continuous data.
In comparison with the other continuous-valued spectral algorithms, the binless
algorithm can achieve consistent estimation of equilibrium dynamics with only
linear complexity.
1 Introduction
In the last two decades, a collection of highly related dynamic models including observable operator
models (OOMs) [1?3], predictive state representations [4?6] and reduced-rank hidden Markov models
[7, 8], have become powerful and increasingly popular tools for analysis of dynamic data. These
models are largely similar, and all can be learned by spectral methods in a general framework of
multiplicity automata, or equivalently sequential systems [9, 10]. In contrast with the other commonly
used models such as Markov state models [11, 12], Langevin models [13, 14], traditional hidden
Markov models (HMMs) [15, 16], Gaussian process state-space models [17, 18] and recurrent
neural networks [19], the spectral learning based models can exactly characterize the dynamics of a
stochastic system without any a priori knowledge except the assumption of finite dynamic rank (i.e.,
the rank of Hankel matrix) [10, 20], and the parameter estimation can be efficiently performed for
discrete-valued systems without solving any intractable inverse or optimization problem. We focus in
this paper only on stochastic systems without control inputs and all spectral learning based models
can be expressed in the form of OOMs for such systems, so we will refer to them as OOMs below.
In most literature on spectral learning, the observation data are assumed to be identically (possibly not
independently) distributed so that the expected values of observables associated with the parameter
estimation can be reliably computed by empirical averaging. However, this assumption can be
severely violated due to the limit of experimental technique or computational capacity in many
practical situations, especially where metastable physical or chemical processes are involved. A
notable example is the distributed computing project Folding@home [21], which explores protein
folding processes that occur on the timescales of microseconds to milliseconds based on molecular
dynamics simulations on the order of nanoseconds in length. In such a nonequilibrium case where
distributions of observation data are time-varying and dependent on initial conditions, it is still unclear
if promising estimates of OOMs can be obtained. In [22], a hybrid estimation algorithm was proposed
to improve spectral learning of large-time scale processes by using both dynamic and static data,
but it still requires assumption of identically distributed data. One solution to reduce the statistical
bias caused by nonequilibrium data is to discard the observation data generated before the system
reaches steady state, which is a common trick in applied statistics [23]. Obviously, this way suffers
from substantial information loss and is infeasible when observation trajectories are shorter than
mixing times. Another possible way would be to learn OOMs by likelihood-based estimation instead
of spectral methods, but there is no effective maximum likelihood or Bayesian estimator of OOMs
until now. The maximum pseudo-likelihood estimator of OOMs proposed in [24] demands high
computational cost and its consistency is yet unverified.
Another difficulty for spectral approaches is learning with continuous data, where density estimation
problems are involved. The density estimation can be performed by parametric methods such as the
fuzzy interpolation [25] and the kernel density estimation [8]. But these methods would reduce the
flexibility of OOMs for dynamic modeling because of their limited expressive capacity. Recently, a
kernel embedding based spectral algorithm was proposed to cope with continuous data [26], which
avoids explicit density estimation and learns OOMs in a nonparametric manner. However, the kernel
embedding usually yields a very large computational complexity, which greatly limits practical
applications of this algorithm to real-world systems.
The purpose of this paper is to address the challenge of spectral learning of OOMs from nonequilibrium data for analysis of both discrete- and continuous-valued systems. We first provide a modified
spectral method for discrete-valued stochastic systems which allows us to consistently estimate
the equilibrium dynamics from nonequilibrium data, and then extend this method to continuous
observations in a binless manner. In comparison with the existing learning methods for continuous
OOMs, the proposed binless spectral method does not rely on any density estimator, and can achieve
consistent estimation with linear computational complexity in data size even if the assumption of identically distributed observations does not hold. Moreover, some numerical experiments are provided
to demonstrate the capability of the proposed methods.
2 Preliminaries
2.1 Notation
In this paper, we use $P$ to denote probability distribution for discrete random variables and probability density for continuous random variables. The indicator function of event $e$ is denoted by $1_e$ and the Dirac delta function centered at $x$ is denoted by $\delta_x(\cdot)$. For a given process $\{a_t\}$, we write the subsequence $(a_k, a_{k+1}, \ldots, a_{k'})$ as $a_{k:k'}$, and $E_\infty[a_t] \triangleq \lim_{t\to\infty} E[a_t]$ denotes the equilibrium expected value of $a_t$ if the limit exists. In addition, the convergence in probability is denoted by $\xrightarrow{p}$.
2.2 Observable operator models
An $m$-dimensional observable operator model (OOM) with observation space $\mathcal{O}$ can be represented by a tuple $\mathbf{M} = (\omega, \{\Xi(x)\}_{x\in\mathcal{O}}, \sigma)$, which consists of an initial state vector $\omega \in \mathbb{R}^{1\times m}$, an evaluation vector $\sigma \in \mathbb{R}^{m\times 1}$ and an observable operator matrix $\Xi(x) \in \mathbb{R}^{m\times m}$ associated to each element $x \in \mathcal{O}$. $\mathbf{M}$ defines a stochastic process $\{x_t\}$ in $\mathcal{O}$ as
$$P(x_{1:t}\,|\,\mathbf{M}) = \omega\,\Xi(x_{1:t})\,\sigma \qquad (1)$$
under the condition that $\omega\Xi(x_{1:t})\sigma \ge 0$, $\omega\Xi(\mathcal{O})\sigma = 1$ and $\omega\Xi(x_{1:t})\sigma = \omega\Xi(x_{1:t})\Xi(\mathcal{O})\sigma$ hold for all $t$ and $x_{1:t} \in \mathcal{O}^t$ [10], where $\Xi(x_{1:t}) \triangleq \Xi(x_1)\cdots\Xi(x_t)$ and $\Xi(A) \triangleq \int_A \mathrm{d}x\,\Xi(x)$. Two OOMs $\mathbf{M}$ and $\mathbf{M}'$ are said to be equivalent if $P(x_{1:t}|\mathbf{M}) \equiv P(x_{1:t}|\mathbf{M}')$.
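To make the definition above concrete, the following is a minimal sketch of evaluating Eq. (1) for a discrete OOM; the two-dimensional toy parameters over $\mathcal{O} = \{0, 1\}$ are illustrative assumptions, not values from the paper.

```python
import numpy as np

def oom_sequence_probability(omega, Xi, sigma, seq):
    """Evaluate P(x_{1:t} | M) = omega Xi(x_1) ... Xi(x_t) sigma (Eq. 1).

    omega : (1, m) initial state vector
    Xi    : dict mapping each observation x to its (m, m) operator matrix
    sigma : (m, 1) evaluation vector
    seq   : iterable of observations x_1, ..., x_t
    """
    state = omega
    for x in seq:
        state = state @ Xi[x]   # apply one observable operator per symbol
    return float(state @ sigma)

# Toy 2-dimensional OOM over O = {0, 1} (illustrative values only);
# note omega @ (Xi[0] + Xi[1]) @ sigma = 1 as required.
Xi = {0: np.array([[0.6, 0.1], [0.2, 0.3]]),
      1: np.array([[0.2, 0.1], [0.1, 0.4]])}
omega = np.array([[0.5, 0.5]])
sigma = np.ones((2, 1))
print(oom_sequence_probability(omega, Xi, sigma, [0, 1, 1]))
```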
3 Spectral learning of OOMs
3.1 Algorithm
Here and hereafter, we only consider the case that the observation space $\mathcal{O}$ is a finite set. (Learning with continuous observations will be discussed in Section 4.2.) A large number of largely similar
Algorithm 1 General procedure for spectral learning of OOMs
INPUT: Observation trajectories generated by a stochastic process $\{x_t\}$ in $\mathcal{O}$
OUTPUT: $\hat{\mathbf{M}} = (\hat\omega, \{\hat\Xi(x)\}_{x\in\mathcal{O}}, \hat\sigma)$
PARAMETER: $m$: dimension of the OOM. $D_1, D_2$: numbers of feature functions. $L$: order of feature functions.
1: Construct feature functions $\phi_1 = (\phi_{1,1}, \ldots, \phi_{1,D_1})^\top$ and $\phi_2 = (\phi_{2,1}, \ldots, \phi_{2,D_2})^\top$, where each $\phi_{i,j}$ is a mapping from $\mathcal{O}^L$ to $\mathbb{R}$ and $D_1, D_2 \ge m$.
2: Approximate
$$\bar\eta_1 \triangleq E[\phi_1(x_{t-L:t-1})], \quad \bar\eta_2 \triangleq E[\phi_2(x_{t:t+L-1})] \qquad (5)$$
$$\bar{C}_{1,2} \triangleq E\big[\phi_1(x_{t-L:t-1})\,\phi_2(x_{t:t+L-1})^\top\big] \qquad (6)$$
$$\bar{C}_{1,3}(x) \triangleq E\big[1_{x_t = x} \cdot \phi_1(x_{t-L:t-1})\,\phi_2(x_{t+1:t+L})^\top\big], \quad \forall x \in \mathcal{O} \qquad (7)$$
by their empirical means $\hat\eta_1$, $\hat\eta_2$, $\hat{C}_{1,2}$ and $\hat{C}_{1,3}(x)$ over observation data.
3: Compute $F_1 = U\Sigma^{-1} \in \mathbb{R}^{D_1\times m}$ and $F_2 = V \in \mathbb{R}^{D_2\times m}$ from the truncated singular value decomposition $\hat{C}_{1,2} \approx U\Sigma V^\top$, where $\Sigma \in \mathbb{R}^{m\times m}$ is a diagonal matrix containing the top $m$ singular values of $\hat{C}_{1,2}$, and $U$ and $V$ consist of the corresponding $m$ left and right singular vectors of $\hat{C}_{1,2}$.
4: Compute
$$\hat\omega = \hat\eta_1^\top F_1 \qquad (8)$$
$$\hat\Xi(x) = F_1^\top \hat{C}_{1,3}(x)\,F_2, \quad \forall x \in \mathcal{O} \qquad (9)$$
$$\hat\sigma = F_2^\top \hat\eta_2 \qquad (10)$$
spectral methods have been developed, and the generic learning procedure of these methods is
summarized in Algorithm 1 by omitting details of algorithm implementation and parameter choice
[27, 7, 28]. For convenience of description and analysis, we specify in this paper the formula for
calculating $\hat\eta_1$, $\hat\eta_2$, $\hat{C}_{1,2}$ and $\hat{C}_{1,3}(x)$ in Line 2 of Algorithm 1 as follows:
$$\hat\eta_1 = \frac{1}{N}\sum_{n=1}^{N} \phi_1(\vec{s}^{\,1}_n), \quad \hat\eta_2 = \frac{1}{N}\sum_{n=1}^{N} \phi_2(\vec{s}^{\,2}_n) \qquad (2)$$
$$\hat{C}_{1,2} = \frac{1}{N}\sum_{n=1}^{N} \phi_1(\vec{s}^{\,1}_n)\,\phi_2(\vec{s}^{\,2}_n)^\top \qquad (3)$$
$$\hat{C}_{1,3}(x) = \frac{1}{N}\sum_{n=1}^{N} 1_{s^2_n = x}\,\phi_1(\vec{s}^{\,1}_n)\,\phi_2(\vec{s}^{\,3}_n)^\top, \quad \forall x \in \mathcal{O} \qquad (4)$$
Here $\{(\vec{s}^{\,1}_n, s^2_n, \vec{s}^{\,3}_n)\}_{n=1}^{N}$ is the collection of all subsequences of length $(2L+1)$ appearing in observation data ($N = T - 2L$ for a single observation trajectory of length $T$). If an observation subsequence $x_{t-L:t+L}$ is denoted by $(\vec{s}^{\,1}_n, s^2_n, \vec{s}^{\,3}_n)$ with some $n$, then $\vec{s}^{\,1}_n = x_{t-L:t-1}$ and $\vec{s}^{\,3}_n = x_{t+1:t+L}$ represent the prefix and suffix of $x_{t-L:t+L}$ of length $L$, $s^2_n = x_t$ is the intermediate observation value, and $\vec{s}^{\,2}_n = x_{t:t+L-1}$ is an "intermediate part" of the subsequence of length $L$ starting from time $t$ (see Fig. 1 for a graphical illustration).
Algorithm 1 is much more efficient than the commonly used likelihood-based learning algorithms and does not suffer from local optima issues. In addition, and more importantly, this algorithm can be shown to be consistent if $(\vec{s}^{\,1}_n, s^2_n, \vec{s}^{\,3}_n)$ are (i) independently sampled from $\mathbf{M}$ or (ii) obtained from a finite number of trajectories which have fully mixed so that all observation triples are identically distributed (see, e.g., [8, 3, 10] for related works). However, the asymptotic correctness of OOMs learned from short trajectories starting from nonequilibrium states has not been formally determined.
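As an illustration of Algorithm 1 with Eqs. (2)–(4) and (8)–(10), here is a compact sketch for a single discrete trajectory; indicator features over length-$L$ windows are one simple admissible choice of $\phi_1, \phi_2$, and all variable names are ours.

```python
import numpy as np

def spectral_oom(traj, n_obs, L, m):
    """Algorithm 1 for one discrete trajectory, using indicator features
    over length-L windows for both phi_1 and phi_2 (so D_1 = D_2 = n_obs**L)."""
    def widx(w):  # index of the indicator feature that fires for window w
        return int(sum(int(x) * n_obs**i for i, x in enumerate(w)))

    D = n_obs ** L
    N = len(traj) - 2 * L
    eta1, eta2 = np.zeros(D), np.zeros(D)
    C12, C13 = np.zeros((D, D)), np.zeros((n_obs, D, D))
    for t in range(L, len(traj) - L):          # empirical means, Eqs. (2)-(4)
        i = widx(traj[t - L:t])                # prefix window  s^1_n
        j = widx(traj[t:t + L])                # middle window  s^2-part
        k = widx(traj[t + 1:t + L + 1])        # suffix window  s^3_n
        eta1[i] += 1.0 / N
        eta2[j] += 1.0 / N
        C12[i, j] += 1.0 / N
        C13[traj[t], i, k] += 1.0 / N
    U, S, Vt = np.linalg.svd(C12)              # truncated SVD, top m components
    F1 = U[:, :m] / S[:m]                      # F_1 = U Sigma^{-1}
    F2 = Vt[:m].T                              # F_2 = V
    omega = eta1 @ F1                          # Eq. (8)
    Xi = {x: F1.T @ C13[x] @ F2 for x in range(n_obs)}  # Eq. (9)
    sigma = F2.T @ eta2                        # Eq. (10)
    return omega, Xi, sigma
```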
Figure 1: Illustration of variables $\vec{s}^{\,1}_n$, $s^2_n$, $\vec{s}^{\,3}_n$ and $\vec{s}^{\,2}_n$ used in Eqs. (2)–(4) with $(\vec{s}^{\,1}_n, s^2_n, \vec{s}^{\,3}_n) = x_{t-L:t+L}$.
3.2 Theoretical analysis
We now analyze statistical properties of the spectral algorithm without the assumption of identically
distributed observations. Before stating our main result, some assumptions on observation data are
listed as follows:
Assumption 1. The observation data consists of $I$ independent trajectories of length $T$ produced by a stochastic process $\{x_t\}$, and the data size tends to infinity with (i) $I \to \infty$ and $T = T_0$ or (ii) $T \to \infty$ and $I = I_0$.
Assumption 2. $\{x_t\}$ is driven by an $m$-dimensional OOM $\mathbf{M} = (\omega, \{\Xi(x)\}_{x\in\mathcal{O}}, \sigma)$, and
$$\frac{1}{T'}\sum_{t=1}^{T'} f_t \xrightarrow{p} E_\infty[f(x_{t:t+l-1})] = E_\infty[f(x_{t:t+l-1})\,|\,x_{1:k}] \qquad (11)$$
as $T' \to \infty$ for all $k$, $l$, $x_{1:k}$ and $f: \mathcal{O}^l \mapsto \mathbb{R}$, where $f_t$ denotes $f(x_{t:t+l-1})$.
Assumption 3. The rank of the limit of $\hat{C}_{1,2}$ is not less than $m$.
Notice that Assumption 2 only states the asymptotic stationarity of $\{x_t\}$ and marginal distributions of observation triples are possibly time dependent if $\omega \ne \omega\,\Xi(\mathcal{O})$. Assumption 3 ensures that the limit of $\hat{\mathbf{M}}$ given by Algorithm 1 is well defined, which generally holds for minimal OOMs (see [10]).
Based on the above assumptions, we have the following theorem concerning the statistical consistency
of the OOM learning algorithm (see Appendix A.1 for proof):
Theorem 1. Under Assumptions 1–3, there exists an OOM $\mathbf{M}' = (\omega', \{\Xi'(x)\}_{x\in\mathcal{O}}, \sigma')$ which is equivalent to $\mathbf{M}$ and satisfies
$$\hat\sigma \xrightarrow{p} \sigma', \qquad \hat\Xi(x) \xrightarrow{p} \Xi'(x), \;\; \forall x \in \mathcal{O} \qquad (12)$$
This theorem is central in this paper, which implies that the spectral learning algorithm can achieve consistent estimation of all parameters of OOMs except initial state vectors even for nonequilibrium data. ($\hat\omega \xrightarrow{p} \omega'$ does not hold in most cases except when $\{x_t\}$ is stationary.) It can be further generalized according to requirements in more complicated situations where, for example, observation trajectories are generated with multiple different initial conditions (see Appendix A.2).
4 Spectral learning of equilibrium OOMs
In this section, applications of spectral learning to the problem of recovering equilibrium properties
of dynamic systems from nonequilibrium data will be highlighted, which is an important problem in
practice especially for thermodynamic and kinetic analysis in computational physics and chemistry.
4.1 Learning from discrete data
According to the definition of OOMs, the equilibrium dynamics of an OOM $\mathbf{M} = (\omega, \{\Xi(x)\}_{x\in\mathcal{O}}, \sigma)$ can be described by an equilibrium OOM $\mathbf{M}_{\mathrm{eq}} = (\omega_{\mathrm{eq}}, \{\Xi(x)\}_{x\in\mathcal{O}}, \sigma)$ as
$$\lim_{t\to\infty} P(x_{t+1:t+k} = z_{1:k}\,|\,\mathbf{M}) = P(x_{1:k} = z_{1:k}\,|\,\mathbf{M}_{\mathrm{eq}}) \qquad (13)$$
if the equilibrium state vector
$$\omega_{\mathrm{eq}} = \lim_{t\to\infty} \omega\,\Xi(\mathcal{O})^t \qquad (14)$$
exists. From (13) and (14), we have
$$\omega_{\mathrm{eq}}\,\Xi(\mathcal{O}) = \lim_{t\to\infty} \omega\,\Xi(\mathcal{O})^{t+1} = \omega_{\mathrm{eq}}, \qquad \omega_{\mathrm{eq}}\,\sigma = \lim_{t\to\infty} \sum_{x\in\mathcal{O}} P(x_{t+1} = x) = 1 \qquad (15)$$
The above equilibrium constraint of OOMs motivates the following algorithm for learning equilibrium OOMs: Perform Algorithm 1 to get $\hat\Xi(x)$ and $\hat\sigma$, and calculate $\hat\omega_{\mathrm{eq}}$ by a quadratic programming problem
$$\hat\omega_{\mathrm{eq}} = \arg\min_{w\in\{w\,|\,w\hat\sigma=1\}} \big\| w\,\hat\Xi(\mathcal{O}) - w \big\|^2 \qquad (16)$$
(See Appendix A.3 for a closed-form expression of the solution to (16).)
The existence and uniqueness of $\omega_{\mathrm{eq}}$ are shown in Appendix A.3, which yield the following theorem:
Theorem 2. Under Assumptions 1–3, the estimated equilibrium OOM $\hat{\mathbf{M}}_{\mathrm{eq}} = (\hat\omega_{\mathrm{eq}}, \{\hat\Xi(x)\}_{x\in\mathcal{O}}, \hat\sigma)$ provided by Algorithm 1 and Eq. (16) satisfies
$$P\big(x_{1:l} = z_{1:l}\,|\,\hat{\mathbf{M}}_{\mathrm{eq}}\big) \xrightarrow{p} \lim_{t\to\infty} P(x_{t+1:t+l} = z_{1:l}) \qquad (17)$$
for all $l$ and $z_{1:l}$.
Remark 1. $\hat\omega_{\mathrm{eq}}$ can also be computed as an eigenvector of $\hat\Xi(\mathcal{O})$. But the eigenvalue problem possibly yields numerical instability and complex values because of statistical noise, unless some specific feature functions $\phi_1, \phi_2$ are selected so that $\hat\omega_{\mathrm{eq}}\,\hat\Xi(\mathcal{O}) = \hat\omega_{\mathrm{eq}}$ can be exactly solved in the real field [29].
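The quadratic program (16) admits the closed-form solution referred to above; a minimal sketch via equality-constrained least squares is given below, where the small ridge term is our own numerical safeguard against the (expected) singularity of the quadratic form.

```python
import numpy as np

def equilibrium_state_vector(Xi_O, sigma, ridge=1e-12):
    """Solve Eq. (16):  min_w || w (Xi(O) - I) ||^2  s.t.  w sigma = 1.

    Xi_O  : (m, m) estimated operator Xi_hat(O)
    sigma : (m,) or (m, 1) estimated evaluation vector sigma_hat
    """
    m = Xi_O.shape[0]
    s = np.ravel(sigma)
    A = Xi_O - np.eye(m)
    G = A @ A.T + ridge * np.eye(m)   # quadratic form; ridge guards singularity
    u = np.linalg.solve(G, s)         # Lagrange stationarity: G w^T = mu * sigma
    w = u / (s @ u)                   # rescale so that w sigma = 1
    return w.reshape(1, m)
```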
4.2 Learning from continuous data
A straightforward way to extend spectral algorithms to handle continuous data is based on the coarse-graining of the observation space. Suppose that $\{x_t\}$ is a stochastic process in a continuous observation space $\mathcal{O} \subset \mathbb{R}^d$, and $\mathcal{O}$ is partitioned into $J$ discrete bins $B_1, \ldots, B_J$. Then we can utilize the algorithm in Section 4.1 to approximate the equilibrium transition dynamics between bins as
$$\lim_{t\to\infty} P(x_{t+1} \in B_{j_1}, \ldots, x_{t+l} \in B_{j_l}) \approx \hat\omega_{\mathrm{eq}}\,\hat\Xi(B_{j_1}) \cdots \hat\Xi(B_{j_l})\,\hat\sigma \qquad (18)$$
and obtain a binned OOM $\hat{\mathbf{M}}_{\mathrm{eq}} = (\hat\omega_{\mathrm{eq}}, \{\hat\Xi(x)\}_{x\in\mathcal{O}}, \hat\sigma)$ for the continuous dynamics of $\{x_t\}$ with
$$\hat\Xi(x) = \frac{\hat\Xi(B(x))}{\mathrm{vol}(B(x))} \qquad (19)$$
by assuming the observable operator matrices are piecewise constant on bins, where $B(x)$ denotes the bin containing $x$ and $\mathrm{vol}(B)$ is the volume of $B$. Conventional wisdom dictates that the number of bins is a key parameter for the coarse-graining strategy and should be carefully chosen for the balance of statistical noise and discretization error. However, we will show in what follows that it is justifiable to increase the number of bins to infinity.
Let us consider the limit case where $J \to \infty$ and bins are infinitesimal with $\max_j \mathrm{vol}(B_j) \to 0$. In this case,
$$\hat\Xi(x) = \lim_{\mathrm{vol}(B(x))\to 0} \frac{\hat\Xi(B(x))}{\mathrm{vol}(B(x))} = \begin{cases} \hat{W}_{s^2_n}\,\delta_{s^2_n}(x), & x = s^2_n \\ 0, & \text{otherwise} \end{cases} \qquad (20)$$
where
$$\hat{W}_{s^2_n} = \frac{1}{N}\,F_1^\top \phi_1(\vec{s}^{\,1}_n)\,\phi_2(\vec{s}^{\,3}_n)^\top F_2 \qquad (21)$$
according to (9) in Algorithm 1. Then $\hat{\mathbf{M}}_{\mathrm{eq}}$ becomes a binless OOM over sample points $X = \{s^2_n\}_{n=1}^{N}$ and can be estimated from data by Algorithm 2, where the feature functions can be selected
as indicator functions, radial basis functions or other commonly used activation functions for single-layer neural networks in order to digest adequate dynamic information from observation data.
The binless algorithm presented here can be efficiently implemented in a linear computational
complexity O(N ), and is applicable to more general cases where observations are strings, graphs or
other structured variables. Unlike the other spectral algorithms for continuous data, it does not require
Algorithm 2 Procedure for learning binless equilibrium OOMs
INPUT: Observation trajectories generated by a stochastic process $\{x_t\}$ in $\mathcal{O} \subset \mathbb{R}^d$
OUTPUT: Binless OOM $\hat{\mathbf{M}} = (\hat\omega, \{\hat\Xi(x)\}_{x\in\mathcal{O}}, \hat\sigma)$
1: Construct feature functions $\phi_1: \mathbb{R}^{Ld} \mapsto \mathbb{R}^{D_1}$ and $\phi_2: \mathbb{R}^{Ld} \mapsto \mathbb{R}^{D_2}$ with $D_1, D_2 \ge m$.
2: Calculate $\hat\eta_1$, $\hat\eta_2$, $\hat{C}_{1,2}$ by (2) and (3).
3: Compute $F_1 = U\Sigma^{-1} \in \mathbb{R}^{D_1\times m}$ and $F_2 = V \in \mathbb{R}^{D_2\times m}$ from the truncated singular value decomposition $\hat{C}_{1,2} \approx U\Sigma V^\top$.
4: Compute $\hat\omega$, $\hat\sigma$ and $\hat\Xi(x) = \sum_{z\in X} \hat{W}_z\,\delta_z(x)$ by (8), (16) and (21), where $\hat\Xi(\mathcal{O}) = \int_{\mathcal{O}} \mathrm{d}x\,\hat\Xi(x) = \sum_{z\in X} \hat{W}_z$.
that the observed dynamics coincides with some parametric model defined by feature functions. Lastly
but most importantly, as stated in the following theorem, this algorithm can be used to consistently
extract static and kinetic properties of a dynamic system in equilibrium from nonequilibrium data
(see Appendix A.3 for proof):
Theorem 3. Provided that the observation space $\mathcal{O}$ is a closed set in $\mathbb{R}^d$, feature functions $\phi_1, \phi_2$ are bounded on $\mathcal{O}^L$, and Assumptions 1–3 hold, the binless OOM given by Algorithm 2 satisfies
$$E\big[g(x_{1:r})\,|\,\hat{\mathbf{M}}_{\mathrm{eq}}\big] \xrightarrow{p} E_\infty[g(x_{t+1:t+r})] \qquad (22)$$
with
$$E\big[g(x_{1:r})\,|\,\hat{\mathbf{M}}_{\mathrm{eq}}\big] = \sum_{x_{1:r} \in X^r} g(x_{1:r})\,\hat\omega_{\mathrm{eq}}\,\hat{W}_{x_1}\cdots\hat{W}_{x_r}\,\hat\sigma \qquad (23)$$
(i) for all continuous functions $g: \mathcal{O}^r \mapsto \mathbb{R}$;
(ii) for all bounded and Borel measurable functions $g: \mathcal{O}^r \mapsto \mathbb{R}$, if there exist positive constants $\bar\phi$ and $\epsilon$ so that $\|\phi(x)\| \le \bar\phi$ and $\lim_{t\to\infty} P(x_{t+1:t+r} = z_{1:r}) \ge \epsilon$ for all $x \in \mathcal{O}$ and $z_{1:r} \in \mathcal{O}^r$.
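For intuition, a sketch of evaluating the binless expectation in Eq. (23) for $r = 1$ is shown below; the inputs are assumed to come from Algorithm 2, and the function names are ours.

```python
import numpy as np

def binless_expectation_r1(g, samples, W, omega_eq, sigma):
    """Evaluate Eq. (23) with r = 1:  E[g(x_1) | M_eq] = sum_z g(z) * omega_eq W_z sigma.

    samples  : sample points X = {s_n^2} as a list/array
    W        : list of (m, m) matrices W_z, aligned with `samples`
    omega_eq : (1, m) equilibrium state vector
    sigma    : (m, 1) evaluation vector
    """
    return sum(g(z) * float(omega_eq @ W_z @ sigma)
               for z, W_z in zip(samples, W))

# e.g., the equilibrium mean of the observable:
# mean_x = binless_expectation_r1(lambda z: z, samples, W, omega_eq, sigma)
```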
4.3 Comparison with related methods
It is worth pointing out that the spectral learning investigated in this section is an ideal tool for analysis of dynamic properties of stochastic processes, because the related quantities, such as stationary distributions, principal components and time-lagged correlations, can be easily computed from parameters of discrete OOMs or binless OOMs. For many popular nonlinear dynamic models, including Gaussian process state-space models [17] and recurrent neural networks [19], the computation of such quantities is intractable or time-consuming.
The major disadvantage of spectral learning is that the estimated OOMs are usually only "approximately valid" and possibly assign "negative probabilities" to some observation sequences. So it is difficult to apply spectral methods to prediction, filtering and smoothing of signals where Bayesian inference is involved.
5 Applications
In this section, we evaluate our algorithms on two diffusion processes and the molecular dynamics of
alanine dipeptide, and compare them to several alternatives. The detailed settings of simulations and
algorithms are provided in Appendix B.
Brownian dynamics Let us consider a one-dimensional diffusion process driven by the Brownian dynamics
$$\mathrm{d}x_t = -\nabla V(x_t)\,\mathrm{d}t + \sqrt{2\beta^{-1}}\,\mathrm{d}W_t \qquad (24)$$
with observations generated by
$$y_t = \begin{cases} 1, & x_t \in \mathrm{I} \\ 0, & x_t \in \mathrm{II} \end{cases}$$
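The nonequilibrium simulation setting can be reproduced with a standard Euler–Maruyama discretization of Eq. (24); the double-well potential, step size, and the split of the wells at $x = 1$ below are illustrative stand-ins, since the exact simulation details live in Appendix B.

```python
import numpy as np

def simulate(grad_V, beta, x0, dt, n_steps, rng):
    """Euler-Maruyama discretization of dx_t = -grad V(x_t) dt + sqrt(2/beta) dW_t."""
    x = np.empty(n_steps)
    x[0] = x0
    for t in range(1, n_steps):
        x[t] = (x[t - 1] - grad_V(x[t - 1]) * dt
                + np.sqrt(2.0 * dt / beta) * rng.standard_normal())
    return x

rng = np.random.default_rng(0)
grad_V = lambda x: 4.0 * (x - 1.0) ** 3 - 2.0 * (x - 1.0)  # toy double well on [0, 2]
x = simulate(grad_V, beta=1.0, x0=rng.uniform(0.0, 0.2), dt=1e-3, n_steps=50_000)
y = (x <= 1.0).astype(int)   # y_t = 1 iff x_t in well I (split at x = 1 is assumed)
```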
Figure 2: Comparison of modeling methods for a one-dimensional diffusion process. (a) Potential function. (b) Estimates of the difference between equilibrium probabilities of I and II given by the traditional OOM, HMM and the equilibrium OOM (EQ-OOM) obtained from the proposed algorithm with $\mathcal{O} = \{\mathrm{I}, \mathrm{II}\}$. (c) Estimates of the probability difference given by the empirical estimator, HMM and the proposed binless OOM with $\mathcal{O} = [0, 2]$. (d) Stationary histograms of $\{x_t\}$ with 100 uniform bins estimated from trajectories with length 50. The length of each trajectory is $T = 50 \sim 1000$ and the number of trajectories is $[10^5/T]$. Error bars are standard deviations over 30 independent experiments.
The potential function $V(x)$ is shown in Fig. 2(a), which contains two potential wells I, II. In this example, all simulations are performed by starting from a uniform distribution on $[0, 0.2]$, which implies that simulations are highly nonequilibrium and it is difficult to accurately estimate the equilibrium probabilities $\mathrm{Prob}_{\mathrm{I}} = E_\infty[1_{x_t\in\mathrm{I}}] = E_\infty[y_t]$ and $\mathrm{Prob}_{\mathrm{II}} = E_\infty[1_{x_t\in\mathrm{II}}] = 1 - E_\infty[y_t]$ of the two potential wells from the simulation data. We first utilize the traditional spectral learning without enforcing equilibrium, expectation–maximization based HMM learning and the proposed discrete spectral algorithm to estimate $\mathrm{Prob}_{\mathrm{I}}$ and $\mathrm{Prob}_{\mathrm{II}}$ based on $\{y_t\}$, and the estimation results with different simulation lengths are summarized in Fig. 2(b). It can be seen that, in contrast to the other methods, the spectral algorithm for equilibrium OOMs effectively reduces the statistical bias in the nonequilibrium data, and achieves statistically correct estimation at $T = 300$.
Figs. 2(c) and 2(d) plot estimates of the stationary distribution of $\{x_t\}$ obtained from $\{x_t\}$ directly, where the empirical estimator calculates statistics through averaging over all observations. In this case, the proposed binless OOM significantly outperforms the other methods, and its estimates are very close to true values even for extremely short trajectories.
Fig. 3 provides an example of a two-dimensional diffusion process. The dynamics of this process can also be represented in the form of (24) and the potential function is shown in Fig. 3(a). The goal of this example is to estimate the first time-structure based independent component $w_{\mathrm{TICA}}$ [30] of this process from simulation data. Here $w_{\mathrm{TICA}}$ is a kinetic quantity of the process and is the solution to the generalized eigenvalue problem
$$C_\tau w = \lambda C_0 w$$
with the largest eigenvalue, where $C_0$ is the covariance matrix of $\{x_t\}$ in equilibrium and $C_\tau = E_\infty[x_t x_{t+\tau}^\top] - E_\infty[x_t]\,E_\infty[x_t^\top]$ is the equilibrium time-lagged covariance matrix. The simulation data are also nonequilibrium with all simulations starting from the uniform distribution on $[-2, 0] \times [-2, 0]$. Fig. 3(b) displays the estimation errors of $w_{\mathrm{TICA}}$ obtained from different learning methods, which also demonstrates the superiority of the binless spectral method.
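Given estimates of $C_0$ and (symmetrized) $C_\tau$, the first TIC can be obtained directly from the generalized eigenvalue problem; a minimal sketch using scipy follows, with the matrix estimation from data omitted.

```python
import numpy as np
from scipy.linalg import eigh

def first_tic(C_tau, C_0):
    """Solve C_tau w = lambda C_0 w and return the eigenvector belonging to the
    largest eigenvalue, i.e., w_TICA. C_tau is assumed to be symmetrized."""
    eigvals, eigvecs = eigh(C_tau, C_0)      # generalized symmetric eigenproblem
    return eigvecs[:, np.argmax(eigvals)]    # eigh sorts ascending; take the last
```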
Alanine dipeptide Alanine dipeptide is a small molecule which consists of two alanine amino acid
units, and its configuration can be described by two backbone dihedral angles. Fig. 4(a) shows the
potential profile of the alanine dipeptide with respect to the two angles, which contains five metastable
Figure 3: Comparison of modeling methods for a two-dimensional diffusion process. (a) Potential function. (b) Estimation error of $w_{\mathrm{TICA}} \in \mathbb{R}^2$ of the first TIC with lag time 100. Length of each trajectory is $T = 200 \sim 2500$ and the number of trajectories is $[10^5/T]$. Error bars are standard deviations over 30 independent experiments.
Figure 4: Comparison of modeling methods for molecular dynamics of alanine dipeptide. (a) Reduced free energy. (b) Estimation error of $\pi$, where the horizontal axis denotes the total simulation time $T \times I$. Length of each trajectory is $T = 10\,\mathrm{ns}$ and the number of trajectories is $I = 150 \sim 1500$. Error bars are standard deviations over 30 independent experiments.
states {I, II, III, IV, V}. We perform multiple short molecular dynamics simulations starting from the metastable state IV, where each simulation length is 10 ns, and utilize different methods to approximate the stationary distribution $\pi = (\mathrm{Prob}_{\mathrm{I}}, \mathrm{Prob}_{\mathrm{II}}, \ldots, \mathrm{Prob}_{\mathrm{V}})$ of the five metastable states. As shown in Fig. 4(b), the proposed binless algorithm yields lower estimation error compared to each of the alternatives.
6 Conclusion
In this paper, we investigated the statistical properties of the general spectral learning procedure for
nonequilibrium data, and developed novel spectral methods for learning equilibrium dynamics from
nonequilibrium (discrete or continuous) data. The main ideas of the presented methods are to correct
the model parameters by the equilibrium constraint and to handle continuous observations in a binless
manner. Interesting directions of future research include analysis of approximation error with finite
data size and applications to controlled systems.
Acknowledgments
This work was funded by Deutsche Forschungsgemeinschaft (SFB 1114) and European Research Council (starting grant "pcCells").
References
[1] H. Jaeger, "Observable operator models for discrete stochastic time series," Neural Comput., vol. 12, no. 6, pp. 1371–1398, 2000.
[2] M.-J. Zhao, H. Jaeger, and M. Thon, "A bound on modeling error in observable operator models and an associated learning algorithm," Neural Comput., vol. 21, no. 9, pp. 2687–2712, 2009.
[3] H. Jaeger, "Discrete-time, discrete-valued observable operator models: a tutorial," tech. rep., International University Bremen, 2012.
[4] M. L. Littman, R. S. Sutton, and S. Singh, "Predictive representations of state," in Adv. Neural Inf. Process. Syst. 14 (NIPS 2001), pp. 1555–1561, 2001.
[5] S. Singh, M. James, and M. Rudary, "Predictive state representations: A new theory for modeling dynamical systems," in Proc. 20th Conf. Uncertainty Artif. Intell. (UAI 2004), pp. 512–519, 2004.
[6] E. Wiewiora, "Learning predictive representations from a history," in Proc. 22nd Intl. Conf. on Mach. Learn. (ICML 2005), pp. 964–971, 2005.
[7] D. Hsu, S. M. Kakade, and T. Zhang, "A spectral algorithm for learning hidden Markov models," in Proc. 22nd Conf. Learning Theory (COLT 2009), pp. 964–971, 2005.
[8] S. Siddiqi, B. Boots, and G. Gordon, "Reduced-rank hidden Markov models," in Proc. 13th Intl. Conf. Artif. Intell. Stat. (AISTATS 2010), vol. 9, pp. 741–748, 2010.
[9] A. Beimel, F. Bergadano, N. H. Bshouty, E. Kushilevitz, and S. Varricchio, "Learning functions represented as multiplicity automata," J. ACM, vol. 47, no. 3, pp. 506–530, 2000.
[10] M. Thon and H. Jaeger, "Links between multiplicity automata, observable operator models and predictive state representations – a unified learning framework," J. Mach. Learn. Res., vol. 16, pp. 103–147, 2015.
[11] J.-H. Prinz, H. Wu, M. Sarich, B. Keller, M. Senne, M. Held, J. D. Chodera, C. Schütte, and F. Noé, "Markov models of molecular kinetics: Generation and validation," J. Chem. Phys., vol. 134, p. 174105, 2011.
[12] G. R. Bowman, V. S. Pande, and F. Noé, An introduction to Markov state models and their application to long timescale molecular simulation. Springer, 2013.
[13] A. Ruttor, P. Batz, and M. Opper, "Approximate Gaussian process inference for the drift function in stochastic differential equations," in Adv. Neural Inf. Process. Syst. 26 (NIPS 2013), pp. 2040–2048, 2013.
[14] N. Schaudinnus, B. Bastian, R. Hegger, and G. Stock, "Multidimensional Langevin modeling of nonoverdamped dynamics," Phys. Rev. Lett., vol. 115, no. 5, p. 050602, 2015.
[15] L. R. Rabiner, "A tutorial on hidden Markov models and selected applications in speech recognition," Proc. IEEE, vol. 77, no. 2, pp. 257–286, 1989.
[16] F. Noé, H. Wu, J.-H. Prinz, and N. Plattner, "Projected and hidden Markov models for calculating kinetics and metastable states of complex molecules," J. Chem. Phys., vol. 139, p. 184114, 2013.
[17] R. D. Turner, M. P. Deisenroth, and C. E. Rasmussen, "State-space inference and learning with Gaussian processes," in Proc. 13th Intl. Conf. Artif. Intell. Stat. (AISTATS 2010), pp. 868–875, 2010.
[18] A. Svensson, A. Solin, S. Särkkä, and T. Schön, "Computationally efficient Bayesian learning of Gaussian process state space models," in Proc. 19th Intl. Conf. Artif. Intell. Stat. (AISTATS 2016), pp. 213–221, 2016.
[19] S. Hochreiter and J. Schmidhuber, "Long short-term memory," Neural Comp., vol. 9, no. 8, pp. 1735–1780, 1997.
[20] H. Wu, J.-H. Prinz, and F. Noé, "Projected metastable Markov processes and their estimation with observable operator models," J. Chem. Phys., vol. 143, no. 14, p. 144101, 2015.
[21] M. Shirts and V. S. Pande, "Screen savers of the world unite," Science, vol. 290, pp. 1903–1904, 2000.
[22] T.-K. Huang and J. Schneider, "Spectral learning of hidden Markov models from dynamic and static data," in Proc. 30th Intl. Conf. on Mach. Learn. (ICML 2013), pp. 630–638, 2013.
[23] M. K. Cowles and B. P. Carlin, "Markov chain Monte Carlo convergence diagnostics: a comparative review," J. Am. Stat. Assoc., vol. 91, no. 434, pp. 883–904, 1996.
[24] N. Jiang, A. Kulesza, and S. Singh, "Improving predictive state representations via gradient descent," in Proc. 30th AAAI Conf. Artif. Intell. (AAAI 2016), 2016.
[25] H. Jaeger, "Modeling and learning continuous-valued stochastic processes with OOMs," Tech. Rep. GMD-102, German National Research Center for Information Technology (GMD), 2001.
[26] B. Boots, S. M. Siddiqi, G. Gordon, and A. Smola, "Hilbert space embeddings of hidden Markov models," in Proc. 27th Intl. Conf. on Mach. Learn. (ICML 2010), 2010.
[27] M. Rosencrantz, G. Gordon, and S. Thrun, "Learning low dimensional predictive representations," in Proc. 22nd Intl. Conf. on Mach. Learn. (ICML 2004), pp. 88–95, ACM, 2004.
[28] B. Boots, Spectral Approaches to Learning Predictive Representations. PhD thesis, Carnegie Mellon University, 2012.
[29] H. Jaeger, M. Zhao, and A. Kolling, "Efficient estimation of OOMs," in Adv. Neural Inf. Process. Syst. 18 (NIPS 2005), pp. 555–562, 2005.
[30] G. Perez-Hernandez, F. Paul, T. Giorgino, G. De Fabritiis, and F. Noé, "Identification of slow molecular order parameters for Markov model construction," J. Chem. Phys., vol. 139, no. 1, p. 015102, 2013.
5,738 | 6,192 | What Makes Objects Similar:
A Unified Multi-Metric Learning Approach
Han-Jia Ye
De-Chuan Zhan
Xue-Min Si
Yuan Jiang
Zhi-Hua Zhou
National Key Laboratory for Novel Software Technology,
Nanjing University, Nanjing, 210023, China
{yehj,zhandc,sixm,jiangy,zhouzh}@lamda.nju.edu.cn
Abstract
Linkages are essentially determined by similarity measures that may be derived
from multiple perspectives. For example, spatial linkages are usually generated
based on localities of heterogeneous data, whereas semantic linkages can come
from various properties, such as different physical meanings behind social relations. Many existing metric learning models focus on spatial linkages, but leave
the rich semantic factors unconsidered. Similarities based on these models are
usually overdetermined on linkages. We propose a Unified Multi-Metric Learning (UM$^2$L) framework to exploit multiple types of metrics. In UM$^2$L, a type of combination operator is introduced for distance characterization from multiple perspectives, and thus can introduce flexibilities for representing and utilizing both spatial and semantic linkages. Besides, we propose a uniform solver for UM$^2$L which is guaranteed to converge. Extensive experiments on diverse applications exhibit the superior classification performance and comprehensibility of UM$^2$L.
Visualization results also validate its ability on physical meanings discovery.
1 Introduction
Similarities measure the closeness of connections between objects and usually are reflected by distances. Distance Metric Learning (DML) aims to learn an appropriate metric that can figure out the underlying linkages or connections, and thus can greatly improve the performance of similarity-based classifiers, such as kNN.
Objects are linked with each other for different reasons. Global DML methods consider a deterministic single metric which measures similarities between all object pairs. Recently, investigations on local DML have considered locality specific approaches, and consequently multiple metrics are learned. These metrics are either in charge of different spatial areas [15, 20] or responsible for each specific instance [7, 22]. Both global and local DML methods emphasize the linkage constraints (including must-link and cannot-link) in localities with univocal semantic meaning, e.g., the side information of class. However, there can be many different reasons for two instances to be similar in real world applications [3, 9].
Linkages between objects can be with multiple latent semantics. For example, in a social network, friendship linkages may lie on different hobbies of users. Although a user has many friends, their common hobbies could be different and as a consequence, one can be friends with others for different reasons. Another concrete example is, for articles on "A. Feature Learning" which are closely related to both "B. Feature Selection" and "C. Subspace Models", their connections are different in semantics. The linkage between A and B emphasizes "picking up some helpful features", while the common semantic between A and C is about "extracting subspaces" or "feature transformation". These phenomena clearly indicate ambiguities rather than a single meaning in linkage generation.
Hence, the distance/similarity measurements are overdetermined in these applications. As a consequence, a new type of multi-metric learner which can describe the ambiguous linkages is desired.
In this paper, we propose a Unified Multi-Metric Learning (UM$^2$L) approach which integrates the consideration of linking semantic ambiguities and localities in one framework. In the training process, more than one metric is learned to measure distances between instances and each of them reflects a type of inherent spatial or semantic properties of objects. During the test, UM$^2$L can automatically pick up or integrate these measurements, since semantically/spatially similar data points have small distances and otherwise they are pulled away from each other; such a mechanism enables the adaptation to environment to some degree, which is important for the development of learnwares [25]. Furthermore, the proposed framework can be easily adapted to different types of ambiguous circumstances: by specifying the mechanism of metric integration, various types of linkages in applications can be considered; by incorporating sparse constraints, UM$^2$L also turns out good visualization results reflecting physical meanings of latent linkages between objects; besides, by limiting the number of metrics or specifying the regularizer, the approach can be degenerated to some popular DML methods, such as MM-LMNN [20]. Benefitting from the alternating strategy and stochastic techniques, the general framework can be optimized steadily and efficiently.
Our main contributions are: (I) A Unified Multi-Metric Learning framework considering both data localities and ambiguous semantic linkages. (II) A flexible framework adaptable for different tasks. (III) Unified and efficient optimization solutions, superior and interpretable results.
The rest of this paper starts with some notations. Then the UM$^2$L framework is presented in detail, which is followed by a review of related work. The last are experiments and conclusion.
2 The Unified Multi-Metric Framework
Generally speaking, the supervision information for Distance Metric Learning (DML) is formed as pairwise constraints or triplet sets. We restrict our discussion to the latter one, $\mathcal{T} = \{x^t, y^t, z^t\}_{t=1}^{T}$, since it provides more local information. In each triplet, target instance $y^t$ is more similar to $x^t$ than imposter $z^t$, and $\{x^t, y^t, z^t\} \subset \mathbb{R}^d$. $\mathcal{S}_d$ and $\mathcal{S}_d^+$ are the sets of symmetric and positive semidefinite (PSD) matrices of size $d \times d$, respectively. $I$ is the identity matrix. The matrix Frobenius norm is $\|M\|_F = \sqrt{\mathrm{Tr}(M^\top M)}$. Let $m_i$ and $m^j$ denote the $i$-th row and $j$-th column of matrix $M$ respectively, and the $\ell_{2,1}$-norm is $\|M\|_{2,1} = \sum_i^d \|m_i\|_2$. Operator $[\cdot]_+ = \max(\cdot, 0)$ preserves the non-negative part of the input value. DML aims at learning a metric $M \in \mathcal{S}_d^+$ making similar instances have small distances to each other and dissimilar ones far apart. The (squared) Mahalanobis distance between pair $(x^t, y^t)$ with metric $M$ can be denoted as:
$$\mathrm{Dis}^2_M(x^t, y^t) = (x^t - y^t)^\top M (x^t - y^t) = \mathrm{Tr}(M A^t_{xy}). \qquad (1)$$
$A^t_{xy} = (x^t - y^t)(x^t - y^t)^\top \in \mathcal{S}_d^+$ is the outer product of the difference between instances $x^t$ and $y^t$. The distance in Eq. 1 assumes that there is a single type of relationship between object features, which uses univocal linkages between objects.
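A two-line sketch of Eq. (1), in both the quadratic form and the equivalent trace form, may help fix notation; the function names are ours.

```python
import numpy as np

def dis2(M, x, y):
    """Squared Mahalanobis distance of Eq. (1): (x - y)^T M (x - y)."""
    d = x - y
    return float(d @ M @ d)

def dis2_trace(M, x, y):
    """The equivalent trace form Tr(M A_xy) with A_xy the outer product."""
    d = x - y
    return float(np.trace(M @ np.outer(d, d)))
```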
Multi-metric learning takes data heterogeneities into consideration. However, both the single metric learned by global DML and the multiple metrics learned with local methods focus on exploiting locality information, i.e., constraints or metrics are closely related to the localities. In particular, local DML approaches mainly aim at learning a set of multiple metrics, one for each local area. In this paper, a general multi-metric configuration is investigated to deal with linkage ambiguities from both semantic and locality perspectives. We denote the set of $K$ multiple metrics to be learned as $\mathcal{M}_K = \{M_1, M_2, \ldots, M_K\}$ with $\{M_k\}_{k=1}^{K} \subset \mathcal{S}_d^+$. The similarity score between a pair of instances based on $M_k$, w.l.o.g., can be set as the negative distance, i.e., $f_{M_k}(x^t, y^t) = -\mathrm{Dis}^2_{M_k}(x^t, y^t)$. In the multi-metric scenario, consequently, there will be a set of multiple similarity scores $f_{\mathcal{M}_K} = \{f_{M_k}\}_{k=1}^{K}$. Each metric/score in the set reflects a particular semantic or spatial view of data. The overall similarity score is $f^v(x^t, y^t) = \varphi^v(f_{\mathcal{M}_K}(x^t, y^t))$, $v \in \{1, 2\}$, where $\varphi^v(\cdot)$ is a functional operator closely related to concrete applications, which maps the set of similarity scores w.r.t. all metrics to a single value. With these discussions, the Unified Multi-Metric Learning (UM$^2$L) framework can be denoted as:
$$\min_{\mathcal{M}_K} \; \frac{1}{T}\sum_{t=1}^{T} \ell\big(f^1(x^t, y^t) - f^2(x^t, z^t)\big) + \lambda\sum_{k=1}^{K} \Omega_k(M_k). \qquad (2)$$
The overall inter-instance similarities $f^1$ and $f^2$ are based on operators $\varphi^1$ and $\varphi^2$ respectively. $\ell(\cdot)$ is a convex loss function which encourages $(x^t, y^t)$ to have a larger overall similarity score than $(x^t, z^t)$. Note that although inter-instance similarities are defined on different metrics in $\mathcal{M}_K$, the convex loss function $\ell(\cdot)$ acts as a bridge and makes the similarities measured by different metrics comparable as in [20]. The fact that triplet restrictions are provided without specifying concrete measurements makes it reasonable to use flexible $\varphi$s. For instance, in a social network, similar nodes only share some common interests (features) rather than consistently possessing all interests. Tendency on different types of hobbies can be reflected by various metrics. Therefore, the similarity scores may be calculated with different measurements and the operator $\varphi^v$ is used for taking charge of "selecting" or "integrating" the right base metric for measuring similarities. The choices of loss functions and $\varphi$s are substantial issues in this framework and will be described later. Convex regularizer $\Omega_k(M_k)$ can impose prior or structure information on base metric $M_k$. $\lambda \ge 0$ is a balance parameter.
2.1 Choices for $\varphi$
UM$^2$L takes both spatial and ambiguous semantic linkages into account based on the configurations of $\varphi$, which integrates or selects base metrics. As an integrator, in applications where locality related multiple metrics are needed, $\varphi$ can be an RBF-like function which decreases as the distance is increasing. The locality determines the impact of each metric. When $\varphi$ acts as a selector, UM$^2$L should automatically assign triplets to one of the metrics which can explain instance similarity/dissimilarity best. Besides, from the aspect of loss function $\ell(\cdot)$, the selected $f$s form a comparable set of similarity measurements [17, 20]. In this case, we may implement the operator $\varphi$ by choosing the most remarkable base metric making the pair of instances $x^t$ and $y^t$ similar. Advantages of the selection mechanism are two-fold. First, it reduces the impact of initial triplet construction in localities [19]; second, it stresses the most evident semantic and reflects the consideration of ambiguous semantics in a linkage construction. Choices of $\varphi$s heavily depend on concrete applications. It is actually a combiner and can get inspiration from ensemble methods [24]. Here, we mainly consider 4 different types of linkage based on various sets of $\varphi$s as follows.
Apical Dominance Similarity (ADS): which is named after the phenomenon in auxanology of plants, where the most important term dominates the evaluation. In this case, $\varphi^1 = \varphi^2 = \max(\cdot)$, i.e., the maximum similarity among all similarities calculated with $\mathcal{M}_K$ on the similar pair $(x^t, y^t)$ should be larger than the maximum similarity of $(x^t, z^t)$. This corresponds to similar pairs being close to each other under at least one measurement, meanwhile dissimilar pairs are disconnected by all different measurements. This type of linkage generation often occurs in social network applications, e.g., nodes are linked together for a portion of similar orientations while nodes are unlinked because there are no common interests. By explicitly modeling each node in a social network as an instance, each of the base metrics $\{M_k\}_{k=1}^{K}$ can represent parts of semantics in linkages. Then the dissimilar pair in a triplet, e.g., the non-friendship relationship, should be with small similarity scores over $\mathcal{M}_K$; while for the similar pair, there should be at least one base similarity score with high value, which reflects their common interests [3, 11].
One Vote Similarity (OVS): which indicates the existence of a potential key metric in $\mathcal{M}_K$, i.e., either the similar or the dissimilar pair is judged by at least one key metric respectively, while remaining metrics with other semantic meanings are ignored. In this case, $\varphi^1 = \max(\cdot)$ and $\varphi^2 = \min(\cdot)$. This type of similarity should usually be applied as an "interpreter" in domains like image and video which are with complicated semantics. The learned metrics reveal different latent concepts in objects. Note that simply applying OVS in UM$^2$L with an improper regularizer $\Omega$ will lead to a trivial solution, i.e., $M_k = 0$, which satisfies all similar pair restrictions yet has no generalization ability. Therefore, we need to set $\Omega_k(M_k) = \|M_k - I\|_F^2$ or restrict the trace of $M_k$ to equal 1.
Rank Grouping Similarity (RGS): which groups the pairs and makes the similar pairs with higher ranks than dissimilar ones. This is the most rigorous similarity and we also refer to it as One-Vote Veto Similarity (OV$^2$S). In this case, $\varphi^1 = \min(\cdot)$ while $\varphi^2 = \max(\cdot)$, which regards the pairs as dissimilar even when there is only one metric denying the linkage. This case is usually applied to applications where latent multiple views exist and different views are measured by different metrics in $\mathcal{M}_K$. In these applications, it is obviously required that all potential views obtain consistencies, and weak conflict detected by one metric should also be punished by the RGS (OV$^2$S) loss.
Average Case Similarity (ACS): which treats all metrics in $\mathcal{M}_K$ equally, i.e., $\varphi^1 = \varphi^2 = \sum(\cdot)$. This is the general case when there is no prior knowledge on applications.
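The four combination operators can be summarized in a few lines of code; below is a minimal sketch of the overall similarity margin $f^1(x^t, y^t) - f^2(x^t, z^t)$ fed to the loss in Eq. (2), with our own function names (the mean stands in for the sum in ACS, which leaves the induced ordering unchanged).

```python
import numpy as np

def base_scores(metrics, a, b):
    """Similarity under each base metric: f_{M_k}(a, b) = -Dis^2_{M_k}(a, b)."""
    d = a - b
    return np.array([-float(d @ M @ d) for M in metrics])

# (phi^1, phi^2) pairs for the four linkage types discussed above.
COMBINERS = {
    "ADS": (np.max, np.max),    # apical dominance
    "OVS": (np.max, np.min),    # one vote
    "RGS": (np.min, np.max),    # rank grouping / one-vote veto
    "ACS": (np.mean, np.mean),  # average case (mean in place of the sum)
}

def triplet_margin(metrics, x, y, z, mode="ADS"):
    """Quantity fed to the loss in Eq. (2): f^1(x, y) - f^2(x, z)."""
    phi1, phi2 = COMBINERS[mode]
    return phi1(base_scores(metrics, x, y)) - phi2(base_scores(metrics, x, z))
```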
There are many derivatives of similarity where $\varphi^v$ is configured as $\min(\cdot)$, $\max(\cdot)$ and $\sum(\cdot)$. Furthermore, $\varphi^v$ in fact can be with richer forms, and we will leave the discussion of choosing different $\varphi$s to later in Section 3. Besides, in the framework, multiple choices of the regularizer $\Omega_k(\cdot)$ can be made. As in most DML methods [14], $\Omega_k(M_k)$ can be set as $\|M_k\|_F^2$. Yet it also can be incorporated with more structural information, e.g., we can configure $\Omega(M_k) = \|M_k\|_{2,1}$, where the row/column sparsity filters influential features for composing linkages in a network; or $\Omega_k(M_k) = \mathrm{Tr}(M_k)$, which guarantees the low rank property for all metrics. Due to the high applicability of the proposed framework, we name it UM$^2$L (Unified Multi-Metric Learning).
2.2 General Solutions for UM$^2$L
UM$^2$L can be solved alternatively between the metrics $\mathcal{M}_K$ and the affiliation portion of each instance, when $\varphi$ is a piecewise linear operator such as $\max(\cdot)$ and $\min(\cdot)$. For example, in the case of ADS, the metric used to measure the similarity of pair $(x^t, y^t)$ is decided by $k^t_{v,*} = \arg\max_k f_{M_k}(x^t, y^t)$, which is the index of the metric $M_k$ that has the largest similarity value over the pair. Once the dominating key metric of each instance is found, the whole optimization problem is convex w.r.t. each $M_k$, which can be easily optimized. On account of the convexity of each sub-problem in the alternating approach, the whole objective is ensured to decrease in iterations so as to converge eventually. It is notable that when dealing with a single triplet in a stochastic approach, the convergence can be guaranteed as well in Theorem 1, which will be introduced later.
In the batch case, for facilitating the discussion, we can implement $\ell(\cdot)$ as the smooth hinge loss, i.e., $\ell(x) = [\frac{1}{2} - x]_+$ if $x \ge 1$ or $x \le 0$, and $\ell(x) = \frac{1}{2}(1 - x)^2$ otherwise. If the trace norm $\Omega_k(M_k) = \mathrm{Tr}(M_k)$ is used, $\mathcal{M}_K$ can be solved with the accelerated projected gradient descent method. If the whole objective in Eq. 2 is denoted as $\mathcal{L}_{\mathcal{M}_K}$, the gradient w.r.t. one metric $M_k$ can be computed as:
$$\frac{\partial \mathcal{L}_{\mathcal{M}_K}}{\partial M_k} = \frac{1}{T}\sum_{t\in\tilde{\mathcal{T}}_k} \frac{\partial\,\ell\big(\mathrm{Tr}(M_{k^t_{2,*}} A^t_{xz}) - \mathrm{Tr}(M_{k^t_{1,*}} A^t_{xy})\big)}{\partial M_k} + \lambda I = \frac{1}{T}\sum_{t\in\tilde{\mathcal{T}}_k} \nabla^t_{M_k}(a^t) + \lambda I, \qquad (3)$$
where the first part is a sum of gradients over the triplet subset $\tilde{\mathcal{T}}_k$ whose membership indexes contain $k$, i.e., $\tilde{\mathcal{T}}_k = \{t \mid k = k^t_{1,*} \text{ or } k = k^t_{2,*}\}$. The separated gradient $\nabla^t_{M_k}(a^t)$, with $a^t = \mathrm{Tr}(M_{k^t_{2,*}} A^t_{xz}) - \mathrm{Tr}(M_{k^t_{1,*}} A^t_{xy})$, for triplet $t \in \tilde{\mathcal{T}}_k$ is:
$$\nabla^t_{M_k}(a^t) = \begin{cases} 0 & \text{if } a^t \ge 1 \\ \delta(k = k^t_{1,*})\,A^t_{xy} - \delta(k = k^t_{2,*})\,A^t_{xz} & \text{if } a^t \le 0 \\ \delta(k = k^t_{1,*})(1 - a^t)\,A^t_{xy} - \delta(k = k^t_{2,*})(1 - a^t)\,A^t_{xz} & \text{otherwise} \end{cases}$$
$\delta(\cdot)$ is the Kronecker delta function, which contributes to the computation of the gradient when $\varphi^v$ is optimized by $M_k$. After accelerated gradient descent, a projection step is conducted to maintain the PSD property of each solution. If structured sparsity is stressed, the $\ell_{2,1}$-norm is used as a regularizer, i.e., $\Omega_k(M_k) = \|M_k\|_{2,1}$. FISTA [2] can be used to optimize the non-smooth regularizer efficiently: after a gradient descent with step size $\tau$ on the smooth loss to get an intermediate solution $V_k = M_k - \frac{\tau}{T}\sum_{t\in\tilde{\mathcal{T}}_k} \nabla^t_{M_k}(a^t)$, the following proximal sub-problem is conducted to get a further update:
$$M_k^* = \arg\min_{M\in\mathcal{S}_d} \; \frac{1}{2}\|M - V_k\|_F^2 + \lambda\|M\|_{2,1}. \qquad (4)$$
The PSD property of $M_k$ can be ensured by a projection in each iteration, or can often be preserved by a last-step projection [14]. Hence, in Eq. 4, only the symmetric constraint of $M_k$ is imposed. Since the $\ell_{2,1}$-norm considers only a one-side (row-wise) property of a matrix, Lim et al. [12] use iterative symmetric projection to get a solution, which has heavy computational cost in some cases. In a reweighted way, the proximal subproblem can be tackled by the following lemma efficiently.
Lemma 1 The proximal problem in Eq. 4 can be solved by updating diagonal matrices $D_1$ and $D_2$ and the symmetric matrix $M$ alternatively:
$$\Big\{D_{1,ii} = \frac{1}{2\|m_i\|_2},\; D_{2,ii} = \frac{1}{2\|m^i\|_2}\Big\}_{i=1}^{d}; \quad \mathrm{vec}(M) = \big(I \otimes (I + \lambda D_1) + (\lambda D_2 \otimes I)\big)^{-1}\mathrm{vec}(V_k),$$
where $\mathrm{vec}(\cdot)$ is the vector form of a matrix and $\otimes$ means the Kronecker product. Due to the diagonal property of each term, the update of $M$ can be further simplified.¹
¹Detailed derivation and efficiency comparison are in the supplementary material.
The update of $M$ in Lemma 1 takes the row-wise and column-wise $\ell_2$-norms into consideration simultaneously, and usually converges in about $5 \sim 10$ iterations.
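A sketch of the reweighted update in Lemma 1 is given below, alternating between the diagonal weights and the vectorized linear system; the placement of $\lambda$ and the small floor on the row/column norms are our assumptions for numerical stability, and the full Kronecker system is built for clarity even though the lemma notes it can be simplified.

```python
import numpy as np

def prox_l21_symmetric(V, lam, n_iter=10, eps=1e-8):
    """Approximate solution of Eq. (4):
    min_M 0.5 * ||M - V||_F^2 + lam * ||M||_{2,1}  over symmetric M,
    via the reweighted updates of Lemma 1."""
    d = V.shape[0]
    M = 0.5 * (V + V.T)                   # symmetric initialization
    I = np.eye(d)
    for _ in range(n_iter):
        D1 = np.diag(1.0 / (2.0 * np.maximum(np.linalg.norm(M, axis=1), eps)))
        D2 = np.diag(1.0 / (2.0 * np.maximum(np.linalg.norm(M, axis=0), eps)))
        A = np.kron(I, I + lam * D1) + np.kron(lam * D2, I)
        M = np.linalg.solve(A, V.flatten(order="F")).reshape(d, d, order="F")
    return M
```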
The batch solution for UM$^2$L can benefit from the acceleration strategy [2]. The computational cost of a full gradient, however, sometimes becomes the dominant expense owing to the huge number of triplets. Inspired by [6], we propose a stochastic solution, which manipulates one triplet in each iteration. In the $s$-th iteration, we sample a triplet $(x^s, y^s, z^s)$ uniformly and update the current solution set $\mathcal{M}^s_K = \{M^s_k\}_{k=1}^{K}$. The whole objective of the $s$-th iteration with $\mathcal{M}^s_K$ is:
$$\mathcal{L}^s_{\mathcal{M}^s_K} = \ell\big(f^1(x^s, y^s) - f^2(x^s, z^s)\big) + \lambda\sum_{k=1}^{K}\Omega_k(M^s_k). \qquad (5)$$
Similar to the proximal gradient solution, after doing (sub-)gradient descent on the loss function in Eq. 5, the proximal operator can be utilized to update base metrics $\{M^s_k\}_{k=1}^{K}$. The stochastic strategy is guaranteed to converge theoretically. By denoting $\mathcal{M}^*_K = (M^*_1, \ldots, M^*_K) \in \arg\min \sum_{s=1}^{S} \mathcal{L}^s(M_1, \ldots, M_K)$ as the optimal solution, given totally $S$ iterations, we have:
Theorem 1 Suppose in the UM$^2$L framework, the loss $\ell(\cdot)$ is a convex one and the selection operator $\varphi^v$ is in piecewise linear form. If each training instance satisfies $\|x\|_2 \le 1$, the sub-gradient set of $\Omega_k(\cdot)$ is bounded by $R$, i.e., $\|\partial\Omega_k(M_k)\|_F^2 \le R^2$, and the sub-gradient of the loss $\ell(\cdot)$ is bounded by $C$. When for each base metric² $\|M_k - M^*_k\|_F \le D$, it holds that:³
$$\sum_{s=1}^{S} \mathcal{L}^s_{\mathcal{M}^s_K} - \mathcal{L}^s_{\mathcal{M}^*_K} \le 2GD + B\sqrt{S}$$
with $G^2 = \max(C^2, R^2)$ and $B = \big(\frac{D^2}{2} + 8G^2\big)$. Given the hinge loss, $C^2 = 16$.
3 Related Work and Discussions
Global DML approaches devote to finding a single metric for all instances [5, 20] while local DML approaches further take spatial data heterogeneities into consideration. Recently, different types of local metric approaches have been proposed, either assigning a cluster-specific metric to an instance based on locality [20] or constructing local metrics generatively [13] or discriminatively [15, 18]. Furthermore, instance specific metric learning methods [7, 22] extend the locality properties of linkages to the extreme and gain improved classification performance. However, these DML methods, either global or local, take univocal semantic from label, namely, the side information.
Richness of semantics is noticed and exploited by machine learning researchers [3, 11]. In the DML community, PSD [9] and SCA [4] are proposed. PSD works as collective classification which is less related to UM$^2$L. SCA, a multi-metric learning method based on pairwise constraints, focuses on learning metrics under one specific type of ambiguities, i.e., linkages are with competitive semantic meanings. UM$^2$L is a more general multi-metric learning framework which considers triplet constraints and various kinds of ambiguous linkages from both locality and semantic views.
UM$^2$L maintains good compatibilities and can degenerate to several state-of-the-art DML methods. For example, by considering univocal semantic ($K = 1$), we can get a global metric learning model used in [14]. If we further choose the hinge loss and set the regularizer $\Omega(M) = \mathrm{Tr}(MB)$ with $B$ an intra-class similar pair covariance matrix, UM$^2$L degrades to LMNN [20]. With the trace norm on $M$, [10] is recovered. For multi-metric approaches, if we set $\varphi^v$ as the indicator of classes for the second instance in a similar or dissimilar pair, UM$^2$L can be transformed to MM-LMNN [20].
4 Experiments on Different Types of Applications
Due to different choices of $\varphi$s in UM$^2$L, we test the framework in diverse real applications, namely social linkage/feature pattern discovering, classification, physical semantic meaning distinguishing and visualization on multi-view semantic detection. To simplify the discussion, we use the alternating batch solver and smooth hinge loss, and set regularizer $\Omega_k(M_k) = \|M_k\|_{2,1}$ unless stated otherwise. Triplets are constructed with 3 targets and 10 impostors via Euclidean nearest neighbors; a sketch of this construction is given below.
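A minimal sketch of the triplet construction just described (3 same-class targets and 10 other-class impostors per anchor, both by Euclidean nearest neighbors); the function name is ours.

```python
import numpy as np

def build_triplets(X, y, n_targets=3, n_impostors=10):
    """Triplets (x^t, y^t, z^t): targets are same-class Euclidean neighbors,
    impostors are the nearest other-class points."""
    triplets = []
    for i in range(len(X)):
        dist = np.linalg.norm(X - X[i], axis=1)
        same = np.where((y == y[i]) & (np.arange(len(X)) != i))[0]
        diff = np.where(y != y[i])[0]
        targets = same[np.argsort(dist[same])[:n_targets]]
        impostors = diff[np.argsort(dist[diff])[:n_impostors]]
        triplets += [(i, j, k) for j in targets for k in impostors]
    return triplets
```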
²This condition generally holds according to the norm regularizer in the objective function.
³Detailed proof can be found in the supplementary material.
4.1 Comparisons on Social Linkage/Feature Pattern Discovering
The ADS configuration is designed for social linkage and pattern discovering. To validate the effectiveness of UM$^2$L$_{\mathrm{ADS}}$, we test it on social network data and synthetic data to show its grouping ability on linkages and features, respectively.
Social linkages come from 6 real world Facebook network datasets from [11]. Given friendship circles of an ego user and users' binary features, the goal of ego-user linkage discovering is to utilize the overall linkage and figure out how users are grouped. We form instances by taking the absolute value of differences between features of the ego and the others. After circles with < 5 nodes are removed, $K$ is configured as the number of circles remaining. Pairwise distance is computed by each metric in $\mathcal{M}_K$, and a threshold is tuned on the training set to filter out irrelevant users. Thus, users with different common hobbies are grouped together. MAC detects group assignments based on binary features [8]; SCA constructs user linkages in a probabilistic way, and EGO [11] can directly output user circles. KMeans (KM) and Spectral Clustering (SC) directly group users based on their features without using linkages. Performance is measured by Balanced Error Rate (BER) [11], the lower the better. Results are listed in Table 1, which shows UM$^2$L$_{\mathrm{ADS}}$ performs the best on most datasets.
Table 1: BER of the linkage discovering comparisons on Facebook datasets: UM$^2$L$_{\mathrm{ADS}}$ vs. others

BER↓          | KM   | SP   | MAC  | SCA  | EGO  | UM$^2$L
Facebook_348  | .669 | .669 | .730 | .847 | .426 | .405
Facebook_414  | .721 | .721 | .699 | .870 | .449 | .420
Facebook_686  | .637 | .637 | .681 | .772 | .446 | .391
Facebook_698  | .661 | .661 | .640 | .729 | .392 | .420
Facebook_1684 | .807 | .807 | .767 | .844 | .491 | .465
Facebook_3980 | .708 | .708 | .541 | .667 | .538 | .402

Table 2: BER of feature pattern discovery comparisons on synthetic datasets: UM$^2$L$_{\mathrm{ADS}}$ vs. others

BER↓     | KM   | SP   | SCA  | EGO  | UM$^2$L
syn1     | .382 | .382 | .392 | .467 | .355
syn2     | .564 | .564 | .399 | .428 | .323
ad       | .670 | .670 | .400 | .583 | .381
ccd      | .244 | .244 | .250 | .225 | .071
my_movie | .370 | .370 | .249 | .347 | .155
reuters  | .704 | .704 | .400 | .609 | .398
Similarly, we test the feature pattern discovering ability of UM2L_ADS on 4 transformed multi-view
datasets. For each dataset, we first extract the principal components of each view and construct sub-linkage
candidates between instances with random thresholds on each single view; thus, these
candidates vary across views. After that, the overall linkage is generated
from these candidates using an 'or' operation. Given the features on each view and the overall linkage, the
goal of feature pattern discovering is to reveal the responsible features of each sub-linkage. Zero-valued
rows/columns of the learned metrics indicate irrelevant features in the corresponding group. Syn1 and
syn2 are purely synthetic datasets with features sampled from Uniform, Beta, Binomial, Gamma and
Normal distributions using different parameters. BER results are listed in Table 2, and UM2L_ADS
achieves the best results on all datasets. These assessments indicate that UM2L_ADS can figure out reasonable
linkages or patterns hidden behind observations, and even does better than domain-specific methods.
4.2 Comparisons on Classification Performance
To test classification generalization performance, our framework is compared with 8 state-of-the-art
metric learning methods on 10 benchmark datasets and 8 large-scale datasets (results on the 8 large-scale
datasets are in the supplementary material). In detail, the global DML methods are ITML [5], LMNN [20]
and EIG [21]; the local and instance-specific DML methods are PLML [18], SCML (local version) [15],
MM-LMNN [20], ISD [22] and SCA [4].
In UM2L, distance values from different metrics are comparable. Therefore, in the test phase, we first
compute the 3 nearest neighbors of a testing instance x̂ using each base metric M_k. Then the 3 × K distance
values are collected adaptively and the smallest 3 (i.e., the 3 instances with the highest similarity scores)
form the neighbor candidates. Majority voting over them is used for prediction.
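This test-phase rule can be sketched as follows (a minimal Python sketch with our own variable names; each M_k is assumed to be a learned positive semi-definite matrix):

```python
import numpy as np
from collections import Counter

def predict(x_test, X_train, y_train, metrics):
    """Pool the 3 nearest neighbors under every base metric M_k, keep the
    globally smallest 3 distances, and majority-vote over their labels."""
    candidates = []
    for M in metrics:
        diff = X_train - x_test                       # shape (n, d)
        d = np.einsum('nd,de,ne->n', diff, M, diff)   # squared Mahalanobis distances
        candidates += [(d[i], y_train[i]) for i in np.argsort(d)[:3]]
    top3 = sorted(candidates)[:3]                     # highest-similarity candidates
    return Counter(label for _, label in top3).most_common(1)[0][0]
```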
Evaluations on classification are repeated 30 times. In each trial, 70% of the instances are used
for training, and the remaining part is used for testing. Cross-validation is employed for parameter
tuning. Generalization errors (mean±std.) based on 3NN are listed in Table 3, where Euclidean distance results (EUCLID) are also listed as a baseline. Considering the multi-semantic
description ability of ADS and the rigorous restrictions of RGS, UM2L_ADS/RGS are implemented in this
comparison. The number of metrics K is configured as the number of classes. Table 3 clearly shows
that UM2L_ADS/RGS perform well on most datasets. In particular, UM2L_RGS achieves the best results on more
datasets according to t-tests, which can be attributed to the rigorous restrictions of RGS.
Table 3: Comparisons of classification performance (test errors, mean±std.) based on 3NN. UM2L_ADS and
UM2L_RGS are compared. The best performance on each dataset is in bold. The last two rows list the Win/Tie/Lose
counts of UM2L_ADS/RGS against the other methods on all datasets with t-test at significance level 95%.

          UM2L_ADS   UM2L_RGS   PLML       SCML       MM-LMNN    ISD        SCA        ITML       LMNN       EIG        EUCLID
Autompg   .201±.034  .225±.031  .265±.048  .253±.026  .256±.032  .288±.033  .286±.037  .292±.032  .259±.037  .266±.031  .260±.036
Clean1    .070±.018  .086±.020  .098±.027  .100±.027  .097±.022  .143±.023  .306±.072  .141±.024  .084±.021  .127±.021  .139±.023
German    .281±.019  .284±.030  .280±.016  .302±.021  .289±.019  .297±.017  .292±.023  .288±.021  .292±.021  .284±.014  .296±.021
Glass     .312±.043  .293±.047  .389±.050  .328±.054  .296±.047  .334±.050  .529±.053  .311±.038  .315±.049  .314±.050  .307±.042
Hayes-r   .276±.044  .307±.068  .436±.201  .296±.053  .282±.062  .378±.093  .379±.068  .342±.080  .314±.072  .289±.067  .398±.046
Heart-s   .190±.035  .194±.063  .365±.127  .205±.040  .191±.037  .192±.036  .203±.039  .186±.032  .200±.026  .189±.034  .190±.030
House-v   .051±.015  .048±.013  .121±.240  .066±.019  .055±.017  .072±.024  .174±.075  .063±.023  .061±.017  .080±.024  .083±.025
Liver-d   .363±.045  .342±.047  .361±.055  .371±.042  .372±.045  .364±.042  .408±.011  .377±.052  .373±.045  .380±.037  .384±.040
Segment   .023±.038  .029±.034  .041±.031  .041±.008  .036±.006  .063±.009  .324±.043  .050±.012  .039±.006  .059±.016  .050±.007
Sonar     .136±.032  .132±.036  .171±.048  .193±.045  .157±.038  .182±.038  .220±.040  .174±.039  .145±.032  .159±.042  .168±.036
W/T/L (UM2L_ADS vs. others)      6/4/0      6/4/0      7/3/0      8/2/0      4/6/0      5/5/0      7/3/0      9/1/0      8/2/0
W/T/L (UM2L_RGS vs. others)      8/2/0      6/4/0      8/2/0      5/5/0      8/2/0      6/4/0      7/3/0      8/2/0      8/2/0
Figure 1: Word clouds generated from the results of the compared DML methods. Subplots: (a) LMNN; (b, c) PLML 1-2; (d-f) MM-LMNN 1-3; (g-l) UM2L 1-6. The size of each word depends on its importance weight (one weight per feature). The weight is calculated by decomposing each metric M_k = L_k L_k^T and computing the l2-norm of each row of L_k, where each row corresponds to a specific word. Each subplot gives a word cloud for a base metric learned by a DML approach.
4.3 Comparisons of Latent Semantic Discovering
UM2L is proposed for DML with both localities and semantic linkages considered. Hence, to investigate its ability of latent semantics discovering, two assessments in real applications are performed,
i.e., Academic Paper Linkage Explanation (APLE) and Image Weak Label Discovering (IWLD).
In APLE, the data are collected from 2012-2015 ICML papers, which can be connected with each other
by more than one topic, yet only the session ID is captured to form explicit linkages. 3 main session directions are picked in this assessment, i.e., 'feature learning', 'online learning' and
'deep learning'. No sub-fields and no additional labels/topics are provided. The simplest TF-IDF representation is used
to extract features, which forms a corpus of 220 papers and 1622 words in total. Aiming at finding
the hidden linkages together with their causes, both UM2L_ADS and UM2L_OVS are invoked. To avoid
trivial solutions, the regularizer for each metric is configured as Ω_k(M_k) = ‖M_k − I‖_F^2 for UM2L_OVS.
All feature (word) weights and correlations can be provided by the learned metrics, i.e., with the decomposition M_k = L_k L_k^T, the l2-norm of each row of L_k can be regarded as the weight of the corresponding
feature (word). The importance of the feature (word) weights is demonstrated by the word clouds in Fig. 1,
where the font size reflects the weight of each word. Due to the page limit, the supplementary
material presents the full evaluations.
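This weight computation can be sketched as follows (a minimal illustration with our own naming; note that the row norms of any factor L_k with L_k L_k^T = M_k equal the square roots of the diagonal entries of M_k):

```python
import numpy as np

def word_weights(M):
    """Decompose a PSD metric M = L L^T and return the l2-norm of each
    row of L, i.e., one importance weight per word (feature)."""
    vals, vecs = np.linalg.eigh(M)
    L = vecs * np.sqrt(np.clip(vals, 0.0, None))  # guard tiny negative eigenvalues
    return np.linalg.norm(L, axis=1)              # equals sqrt(diag(M))
```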
Fig. 1 shows the results of LMNN [20] (a), PLML [18] (b, c), MM-LMNN [20] (d, e, f) and UM2L_OVS
(g-l) with K = 6, respectively. The global method LMNN returns one subplot. The metric learned by
LMNN perhaps has discriminative ability, but the word weights cannot distinguish the sub-fields in the 3
selected domains. The multi-metric learning approaches PLML and MM-LMNN can provide more than one base metric and consequently have multiple word clouds, but the words presented in
the subplots do not carry legible physical semantic meanings. In particular, PLML outputs multiple metrics which are similar to each other (it tends toward a global learner's behavior) and which only focus on the first part
of the alphabet, while MM-LMNN by default only learns as many base
metrics as there are classes. In contrast, the results of UM2L_OVS clearly reveal all 3
fields. On session 'online learning', it can discover different sub-fields such as 'online convex optimization' (g and h) and 'online (multi-)armed bandit problem' (j); for session 'feature learning',
it has 'feature score' (i) and 'PCA projection' (l); and for 'deep learning', the word cloud returns
popular words like 'network layer', 'autoencoder' and 'layer' (k).

Figure 2: Results of visual semantic discovery on images. The first annotation in the bracket is the provided weak label; the second is one of the latent semantic labels discovered by UM2L. Panels: (a) (sea, mountains); (b) (mountains, sea); (c) (sea, sunset).

Figure 3: Subspaces discovered by UM2L_ADS (a, b) and UM2L_RGS (c). Instances possess 2 semantic properties, i.e., color and shape. Blue dotted lines give the decision boundary. Panels: (a) ADS subspace 1; (b) ADS subspace 2; (c) RGS subspace.
Besides APLE, the second application is about weak label discovering in images from [23], where
the most obvious label of each image is used for triplet constraint generation. UM2L_OVS can
obtain multiple metrics, each of which is associated with a certain visual semantic. By computing similarities
based on the different metrics, latent semantics can be discovered; i.e., if we assume that images connected
with high similarities share the same label, missing labels can be completed as in Fig. 2. More weak
label results can be found in the supplementary material.
4.4 Investigations of Latent Multi-View Detection
Another direct application of UM2L is hidden multi-view detection, where data can be described by
multiple views from different channels yet feature partitions are not explicitly provided [16]. Multi-view data are consistent with the assumptions of the ADS and RGS configurations: ADS emphasizes the
existence of relevant views and aims at decomposing helpful aspects or views, while RGS requires
full accordance among views. The trace norm regularizes the approach in this part to get low-dimensional projections. The UM2L framework facilitates the understanding of data by decomposing each base
metric into a low-dimensional subspace, i.e., for each base metric M_k, the 2 eigenvectors L_k ∈ R^{d×2}
corresponding to the 2 largest eigenvalues are picked as orthogonal bases.
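A minimal sketch of this subspace extraction and projection step (our own illustration):

```python
import numpy as np

def top2_subspace(M):
    """Eigenvectors of M for the 2 largest eigenvalues, as a d x 2 basis."""
    _, vecs = np.linalg.eigh(M)   # eigenvalues in ascending order
    return vecs[:, -2:]

def project(X, M):
    """2-D coordinates of instances X for visualization."""
    return X @ top2_subspace(M)
```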
The hidden multi-view data [1] are composed of 200 instances, and each instance has two hidden
views, namely color and shape. We run UM2L_ADS/RGS on this dataset with K = 2. Results of
other methods such as SCA can be found in the supplementary material. Fig. 3 (a) and (b) give the 2-D
visualization results obtained by plotting the projected instances in the subspaces corresponding to metrics M_1
and M_2 of UM2L_ADS. They clearly show that M_1 captures the semantic view of color, while M_2 reflects
the meaning of shape. For UM2L_RGS, the visualization result of one of the obtained metrics
is shown in Fig. 3 (c). Clearly, both UM2L_ADS and UM2L_RGS can capture the
two different semantic views hidden in the data. Moreover, since UM2L_RGS requires more accordance,
it can capture these physical meanings with a single metric.
5 Conclusion
In this paper, we propose the Unified Multi-Metric Learning (UM2L) framework, which can exploit
side information from multiple aspects such as locality and semantic linkage constraints. Notably, both types of constraints can be absorbed into the multi-metric loss functions through a
flexible function operator φ in UM2L. By implementing φ in different forms, UM2L can be used for
local metric learning in classification, latent semantic linkage discovering, etc., or can degrade to state-of-the-art DML approaches. The regularizer in UM2L is flexible for different purposes. UM2L can be
solved by various optimization techniques such as proximal gradient and accelerated stochastic approaches, and a theoretical guarantee on the convergence is proved. Experiments show the superiority
of UM2L in classification performance and hidden semantics discovery. Automatic determination of
the number of base metrics is an interesting direction for future work.
Acknowledgements This research was supported by NSFC (61273301, 61333014), Collaborative
Innovation Center of Novel Software Technology and Industrialization, and Tencent Fund.
References
[1] E. Amid and A. Ukkonen. Multiview triplet embedding: Learning attributes in multiple maps. In ICML, pages 1472-1480, Lille, France, 2015.
[2] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIIMS, 2(1):183-202, 2009.
[3] D. Chakrabarti, S. Funiak, J. Chang, and S. Macskassy. Joint inference of multiple label types in large networks. In ICML, pages 874-882, Beijing, China, 2014.
[4] S. Changpinyo, K. Liu, and F. Sha. Similarity component analysis. In NIPS, pages 1511-1519. MIT Press, Cambridge, MA, 2013.
[5] J. V. Davis, B. Kulis, P. Jain, S. Sra, and I. S. Dhillon. Information-theoretic metric learning. In ICML, pages 209-216, Corvallis, OR, 2007.
[6] J. C. Duchi and Y. Singer. Efficient online and batch learning using forward backward splitting. JMLR, 10:2899-2934, 2009.
[7] E. Fetaya and S. Ullman. Learning local invariant Mahalanobis distances. In ICML, pages 162-168, Lille, France, 2015.
[8] M. Frank, A. P. Streich, D. Basin, and J. M. Buhmann. Multi-assignment clustering for boolean data. JMLR, 13:459-489, 2012.
[9] J.-H. Hu, D.-C. Zhan, X. Wu, Y. Jiang, and Z.-H. Zhou. Pairwised specific distance learning from physical linkages. TKDD, 9(3):Article 20, 2015.
[10] K. Huang, Y. Ying, and C. Campbell. GSML: A unified framework for sparse metric learning. In ICDM, pages 189-198, Miami, FL, 2009.
[11] J. Leskovec and J. Mcauley. Learning to discover social circles in ego networks. In NIPS, pages 539-547. MIT Press, Cambridge, MA, 2012.
[12] D. Lim, G. Lanckriet, and B. McFee. Robust structural metric learning. In ICML, pages 615-623, Atlanta, GA, 2013.
[13] Y.-K. Noh, B.-T. Zhang, and D. Lee. Generative local metric learning for nearest neighbor classification. In NIPS, pages 1822-1830. MIT Press, Cambridge, MA, 2010.
[14] Q. Qian, R. Jin, S. Zhu, and Y. Lin. Fine-grained visual categorization via multi-stage metric learning. In CVPR, pages 3716-3724, Boston, MA, 2015.
[15] Y. Shi, A. Bellet, and F. Sha. Sparse compositional metric learning. In AAAI, pages 2078-2084, Quebec, Canada, 2014.
[16] W. Wang and Z.-H. Zhou. A new analysis of co-training. In ICML, pages 1135-1142, Haifa, Israel, 2010.
[17] B. Wang, J. Jiang, W. Wang, Z.-H. Zhou, and Z. Tu. Unsupervised metric fusion by cross diffusion. In CVPR, pages 2997-3004, Providence, RI, 2012.
[18] J. Wang, A. Kalousis, and A. Woznica. Parametric local metric learning for nearest neighbor classification. In NIPS, pages 1601-1609. MIT Press, Cambridge, MA, 2012.
[19] J. Wang, A. Woznica, and A. Kalousis. Learning neighborhoods for metric learning. In ECML/PKDD, pages 223-236, Bristol, UK, 2012.
[20] K. Q. Weinberger and L. K. Saul. Distance metric learning for large margin nearest neighbor classification. JMLR, 10:207-244, 2009.
[21] Y. Ying and P. Li. Distance metric learning with eigenvalue optimization. JMLR, 13:1-26, 2012.
[22] D.-C. Zhan, M. Li, Y.-F. Li, and Z.-H. Zhou. Learning instance specific distances using metric propagation. In ICML, pages 1225-1232, Montreal, Canada, 2009.
[23] M.-L. Zhang and Z.-H. Zhou. ML-KNN: A lazy learning approach to multi-label learning. Pattern Recognition, 40(7):2038-2048, 2007.
[24] Z.-H. Zhou. Ensemble Methods: Foundations and Algorithms. Chapman & Hall/CRC, Boca Raton, FL, 2012.
[25] Z.-H. Zhou. Learnware: On the future of machine learning. Frontiers of Computer Science, 10(4):589-590, 2016.
Learning and Forecasting Opinion Dynamics in
Social Networks

Abir De*    Isabel Valera†    Niloy Ganguly*    Sourangshu Bhattacharya*    Manuel Gomez-Rodriguez†
* IIT Kharagpur    † MPI for Software Systems
{abir.de,niloy,sourangshu}@cse.iitkgp.ernet.in    {ivalera,manuelgr}@mpi-sws.org
Abstract
Social media and social networking sites have become a global pinboard for exposition and discussion of news, topics, and ideas, where social media users often
update their opinions about a particular topic by learning from the opinions shared
by their friends. In this context, can we learn a data-driven model of opinion dynamics that is able to accurately forecast users' opinions? In this paper, we introduce SLANT, a probabilistic modeling framework of opinion dynamics, which
represents users' opinions over time by means of marked jump diffusion stochastic differential equations, and allows for efficient model simulation and parameter
estimation from historical fine-grained event data. We then leverage our framework to derive a set of efficient predictive formulas for opinion forecasting and
identify conditions under which opinions converge to a steady state. Experiments
on data gathered from Twitter show that our model provides a good fit to the data
and our formulas achieve more accurate forecasting than alternatives.
1 Introduction
Social media and social networking sites are increasingly used by people to express their opinions,
give their 'hot takes' on the latest breaking news, political issues, sports events, and new products.
As a consequence, there has been an increasing interest in leveraging social media and social networking sites to sense and forecast opinions, as well as to understand opinion dynamics. For example,
political parties routinely use social media to sense people's opinion about their political discourse1;
quantitative investment firms measure investor sentiment and trade using social media [18]; and
corporations leverage brand sentiment, estimated from users' posts, likes and shares in social media
and social networking sites, to design their marketing campaigns2. In this context, multiple methods
for sensing opinions, typically based on sentiment analysis [21], have been proposed in recent years.
However, methods for accurately forecasting opinions are still scarce [7, 8, 19], despite the extensive
literature on theoretical models of opinion dynamics [6, 9].
In this paper, we develop a novel modeling framework of opinion dynamics in social media and social networking sites, SLANT3, which allows for accurate forecasting of individual users' opinions.
The proposed framework is based on two simple intuitive ideas: i) users' opinions are hidden until
they decide to share them with their friends (or neighbors); and ii) users may update their opinions
about a particular topic by learning from the opinions shared by their friends. While the latter is one
of the main underlying premises used by many well-known theoretical models of opinion dynamics [6, 9, 22], the former has been ignored by models of opinion dynamics, despite its relevance to
closely related processes such as information diffusion [12].
1 http://www.nytimes.com/2012/10/08/technology/campaigns-use-social-media-to-lure-younger-voters.html
2 http://www.nytimes.com/2012/07/31/technology/facebook-twitter-and-foursquare-as-corporate-focus-groups.html
3 Slant is a particular point of view from which something is seen or presented.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
More in detail, our proposed model represents users' latent opinions as continuous-time stochastic
processes driven by a set of marked jump stochastic differential equations (SDEs) [14]. Such a construction allows each user's latent opinion to be modulated over time by the opinions asynchronously
expressed by her neighbors as sentiment messages. Here, every time a user expresses an opinion by
posting a sentiment message, she reveals a noisy estimate of her current latent opinion. Then, we
exploit a key property of our model, the Markov property, to develop:
I. An efficient estimation procedure to find the parameters that maximize the likelihood of a
set of (millions of) sentiment messages via convex programming.
II. A scalable simulation procedure to sample millions of sentiment messages from the proposed model in a matter of minutes.
III. A set of novel predictive formulas for efficient and accurate opinion forecasting, which
can also be used to identify conditions under which opinions converge to a steady state of
consensus or polarization.
Related work. There is an extensive line of work on theoretical models of opinion dynamics and
opinion formation [3, 6, 9, 15, 17, 26]. However, previous models typically share the following
limitations: (i) they do not distinguish between latent opinion and sentiment (or expressed opinion), which is a noisy observation of the opinion (e.g., thumbs up/down, text sentiment); (ii) they
consider users' opinions to be updated synchronously in discrete time, although opinions may be
updated asynchronously following complex temporal patterns [12]; (iii) the model parameters are
difficult to learn from real fine-grained data and instead are set arbitrarily, and as a consequence they
provide inaccurate fine-grained predictions; and (iv) they focus on analyzing only the steady state
of the users' opinions, neglecting the transient behavior of real opinion dynamics, which opinion
forecasting methods rely on. More recently, there have been some efforts on designing models that
overcome some of the above limitations and provide more accurate predictions [7, 8]. However,
they do not distinguish between opinion and sentiment and still consider opinions to be updated
synchronously in discrete time. Our modeling framework addresses the above limitations and, by
doing so, achieves more accurate opinion forecasting than alternatives.
2 Proposed model
In this section, we first formulate our model of opinion dynamics, starting from the data it is designed
for, and then introduce efficient methods for model parameter estimation and model simulation.
Opinions data. Given a directed social network G = (V, E), we record each message as e :=
(u, m, t), where the triplet means that user u ∈ V posted a message with sentiment m at time
t. Given a collection of messages {e_1 = (u_1, m_1, t_1), . . . , e_n = (u_n, m_n, t_n)}, the history H_u(t)
gathers all messages posted by user u up to but not including time t, i.e.,

    H_u(t) = {e_i = (u_i, m_i, t_i) | u_i = u and t_i < t},    (1)

and H(t) := ∪_{u∈V} H_u(t) denotes the entire history of messages up to but not including time t.
Generative process. We represent users' latent opinions as a multidimensional stochastic process
x*(t), in which the u-th entry, x*_u(t) ∈ R, represents the opinion of user u at time t and the sign *
means that it may depend on the history H(t). Then, every time a user u posts a message at time t,
we draw its sentiment m from a sentiment distribution p(m | x*_u(t)). Here, we can also think of the
sentiment m of each message as a sample from a noisy stochastic process m_u(t) ∼ p(m_u(t) | x*_u(t)).
Further, we represent the message times by a set of counting processes. In particular, we denote
the set of counting processes as a vector N(t), in which the u-th entry, N_u(t) ∈ {0} ∪ Z+, counts
the number of sentiment messages user u posted up to but not including time t. Then, we can
characterize the message rate of the users using their corresponding conditional intensities as

    E[dN(t) | H(t)] = λ*(t) dt,    (2)

where dN(t) := (dN_u(t))_{u∈V} denotes the number of messages per user in the window [t, t + dt)
and λ*(t) := (λ*_u(t))_{u∈V} denotes the associated user intensities, which may depend on the history
H(t). We denote the set of users that u follows by N(u). Next, we specify the intensity functions
λ*(t), the dynamics of the users' opinions x*(t), and the sentiment distribution p(m | x*_u(t)).
Intensity for messages. There is a wide variety of message intensity functions one can choose from
to model the users' intensity λ*(t) [1]. In this work, we consider two of the most popular functional
forms used in the growing literature on social activity modeling using point processes [10, 24, 5]:
I. Poisson process. The intensity is assumed to be independent of the history H(t) and
constant, i.e., λ*_u(t) = μ_u.
II. Multivariate Hawkes process. The intensity captures a mutual excitation phenomenon between message events and depends on the whole history of message events
∪_{v∈{u∪N(u)}} H_v(t) before t:

    λ*_u(t) = μ_u + Σ_{v∈u∪N(u)} b_vu Σ_{e_i∈H_v(t)} κ(t − t_i) = μ_u + Σ_{v∈u∪N(u)} b_vu (κ(t) ⋆ dN_v(t)),    (3)

where the first term, μ_u > 0, models the publication of messages by user u on her own
initiative, and the second term, with b_vu > 0, models the publication of additional messages
by user u due to the influence that previous messages posted by the users she follows have
on her intensity. Here, κ(t) = e^{−νt} is an exponential triggering kernel modeling the decay
of influence of the past events over time, and ⋆ denotes the convolution operation.
In both cases, the couple (N(t), λ*(t)) is a Markov process, i.e., future states of the process (conditional on past and present states) depend only upon the present state, and we can express the users'
intensity more compactly using the following jump stochastic differential equation (SDE):

    dλ*(t) = ν(μ − λ*(t)) dt + B dN(t),

where the initial condition is λ*(0) = μ. The Markov property will become important later.
Stochastic process for opinion. The opinion x*_u(t) of a user u at time t adopts the following form:

    x*_u(t) = α_u + Σ_{v∈N(u)} a_vu Σ_{e_i∈H_v(t)} m_i g(t − t_i) = α_u + Σ_{v∈N(u)} a_vu (g(t) ⋆ (m_v(t) dN_v(t))),    (4)

where the first term, α_u ∈ R, models the original opinion a user u starts with, and the second term,
with a_vu ∈ R, models updates in user u's opinion due to the influence that previous messages with
opinions m_i posted by the users that u follows have on her opinion. Here, g(t) = e^{−ωt} (where
ω > 0) denotes an exponential triggering kernel, which models the decay of influence over time.
The greater the value of ω, the greater the user's tendency to retain her own opinion α_u. Under this
form, the resulting opinion dynamics are Markovian and can be compactly represented by a set of
coupled marked jump stochastic differential equations (proven in Appendix A):
Proposition 1 The tuple (x*(t), λ*(t), N(t)) is a Markov process, whose dynamics are defined by
the following marked jump stochastic differential equations (SDEs):

    dx*(t) = ω(α − x*(t)) dt + A (m(t) ⊙ dN(t))    (5)
    dλ*(t) = ν(μ − λ*(t)) dt + B dN(t)    (6)

where the initial conditions are λ*(0) = μ and x*(0) = α, the marks are the sentiment messages
m(t) = (m_u(t))_{u∈V}, with m_u(t) ∼ p(m | x*_u(t)), and the sign ⊙ denotes the pointwise product.
The above-mentioned Markov property will be the key to the design of efficient model parameter
estimation and model simulation algorithms.
Sentiment distribution. The particular choice of sentiment distribution p(m | x*_u(t)) depends on the
recorded marks. For example, one may consider:
I. Gaussian distribution. The sentiment is assumed to be a real random variable m ∈ R, i.e.,
p(m | x_u(t)) = N(x_u(t), σ_u). This fits well scenarios in which sentiment is extracted from
text using sentiment analysis [13].
II. Logistic. The sentiment is assumed to be a binary random variable m ∈ {−1, 1}, i.e.,
p(m | x_u(t)) = 1/(1 + exp(−m · x_u(t))). This fits well scenarios in which sentiment is
measured by means of upvotes, downvotes or likes.
Our model estimation method can be easily adapted to any log-concave sentiment distribution. However, in the remainder of the paper, we consider the Gaussian distribution since, in our experiments,
sentiment is extracted from text using sentiment analysis.
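Both choices are straightforward to sample from, as the following minimal sketch illustrates (function names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_gaussian_sentiment(x_u, sigma_u):
    """m in R, drawn around the current latent opinion x*_u(t)."""
    return rng.normal(loc=x_u, scale=sigma_u)

def sample_logistic_sentiment(x_u):
    """m in {-1, 1} with P(m = +1) = 1 / (1 + exp(-x*_u(t)))."""
    return 1 if rng.random() < 1.0 / (1.0 + np.exp(-x_u)) else -1
```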
2.1 Model parameter estimation
Given a collection of messages H(T) = {(u_i, m_i, t_i)} recorded during a time period [0, T) in
a social network G = (V, E), we can find the optimal parameters α, μ, A and B by solving a
maximum likelihood estimation (MLE) problem4. To do so, it is easy to show that the log-likelihood
of the messages is given by

    L(α, μ, A, B) = Σ_{e_i∈H(T)} log p(m_i | x*_{u_i}(t_i))   [message sentiments]
                  + Σ_{e_i∈H(T)} log λ*_{u_i}(t_i) − Σ_{u∈V} ∫_0^T λ*_u(τ) dτ   [message times].    (7)

Then, we can find the optimal parameters (α, μ, A, B) using MLE as

    maximize_{α, μ≥0, A, B≥0} L(α, μ, A, B).    (8)

Note that, as long as the sentiment distributions are log-concave, the MLE problem above is concave and thus can be solved efficiently. Moreover, the problem decomposes into 2|V| independent
subproblems, two per user u, since the first term in Eq. 7 only depends on (α, A) whereas the last
two terms only depend on (μ, B), and it can thus be readily parallelized. We find (μ*, B*) using spectral projected gradient descent [4], which works well in practice and achieves ε accuracy in
O(log(1/ε)) iterations, and we find (α*, A*) analytically, since, for Gaussian sentiment distributions,
the problem reduces to a least-squares problem. Fortunately, in each subproblem, we can use the
Markov property from Proposition 1 to precompute the sums and integrals in (8) in linear time, i.e.,
O(|H_u(T)| + |∪_{v∈N(u)} H_v(T)|). Appendix H summarizes the overall estimation algorithm.
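To make Eq. 7 concrete, the following minimal sketch evaluates the log-likelihood for the Poisson case (B = 0) with Gaussian sentiment; variable names are ours and the opinion values x*_{u_i}(t_i) are assumed to be precomputed via Eq. 4:

```python
import numpy as np

def log_likelihood_poisson(mu, x_at_events, m, users, sigma, T):
    """Eq. 7 with constant intensities lambda*_u(t) = mu_u (i.e., B = 0).

    mu, sigma   : per-user rate and sentiment noise, shape (n_users,)
    users       : user index u_i of each event, shape (n_events,)
    m           : observed sentiment m_i of each event, shape (n_events,)
    x_at_events : precomputed opinions x*_{u_i}(t_i), shape (n_events,)
    """
    s = sigma[users]
    # Message sentiments: sum_i log N(m_i; x*_{u_i}(t_i), sigma_{u_i}^2)
    sentiment = np.sum(-0.5 * np.log(2 * np.pi * s**2)
                       - (m - x_at_events)**2 / (2 * s**2))
    # Message times: sum_i log mu_{u_i} - sum_u integral_0^T mu_u dt
    temporal = np.sum(np.log(mu[users])) - T * np.sum(mu)
    return sentiment + temporal
```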
2.2 Model simulation
We leverage the efficient sampling algorithm for multivariate Hawkes processes introduced by Farajtabar et
al. [11] to design a scalable algorithm to sample opinions from our model. The two key ideas that
allow us to adapt the procedure by Farajtabar et al. to our model of opinion dynamics, while keeping
its efficiency, are as follows: (i) the opinion dynamics, defined by Eqs. 5 and 6, are Markovian and
thus we can update individual intensities and opinions in O(1): if t_i and t_{i+1} are two consecutive
events, then we can compute λ*(t_{i+1}) as (λ*(t_i) − μ) exp(−ν(t_{i+1} − t_i)) + μ and x*(t_{i+1}) as
(x*(t_i) − α) exp(−ω(t_{i+1} − t_i)) + α, respectively; and (ii) social networks are typically sparse
and thus both A and B are also sparse; then, whenever a node expresses its opinion, only a small
number of opinions and intensity functions in its local neighborhood will change. As a consequence,
we can reuse the majority of samples from the intensity functions and sentiment distributions for the
next new sample. Appendix I summarizes the overall simulation algorithm.
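For concreteness, the two O(1) update rules at the heart of the sampler can be sketched as follows (a minimal sketch with our own variable names; in practice A and B would be stored as sparse matrices):

```python
import numpy as np

def decay_state(lam, x, mu, alpha, nu, omega, dt):
    """Relax intensities and opinions toward their baselines between
    two consecutive events separated by dt (Eqs. 5-6 without jumps)."""
    lam = (lam - mu) * np.exp(-nu * dt) + mu
    x = (x - alpha) * np.exp(-omega * dt) + alpha
    return lam, x

def apply_event(lam, x, A, B, u, m_u):
    """A message by user u with sentiment m_u bumps the intensities and
    opinions of her followers (columns u of B and A)."""
    lam = lam + B[:, u]
    x = x + A[:, u] * m_u
    return lam, x
```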
3 Opinion forecasting
Our goal here is to develop efficient methods that leverage our model to forecast a user u's
opinion x_u(t) at time t given the history H(t_0) up to time t_0 < t. In the context of our probabilistic model, we will forecast this opinion by efficiently computing the conditional expectation
E_{H(t)\H(t_0)}[x*_u(t) | H(t_0)], where H(t)\H(t_0) denotes the average across histories from t_0 to t,
while conditioning on the history up to H(t_0).
To this aim, we will develop analytical and sampling-based methods to compute the above conditional expectation. Moreover, we will use the former to identify under which conditions users'
average opinion converges to a steady state and, if so, find the steady-state opinion. In this section,
we write H_t = H(t) to lighten the notation and denote the eigenvalues of a matrix X by ξ(X).
3.1 Analytical forecasting
In this section, we derive a set of formulas to compute the conditional expectation for both Poisson
and Hawkes message intensities. However, since the derivation of such formulas for general multivariate Hawkes processes is difficult, we focus here on the case where b_vu = 0 for all v, u ∈ G, v ≠ u,
and rely on the efficient sampling-based method for the general case.
4 Here, if one decides to model the message intensities with a Poisson process, B = 0.
I. Poisson intensity. Suppose each user's messages follow a Poisson process with rate μ_u. Then,
the conditional average opinion is given by (proven in Appendix C):
Theorem 2 Given a collection of messages H_{t_0} recorded during a time period [0, t_0) and λ*_u(t) =
μ_u for all u ∈ G, then

    E_{H_t\H_{t_0}}[x*(t) | H_{t_0}] = e^{(AΛ_1 − ωI)(t − t_0)} x(t_0) + ω(AΛ_1 − ωI)^{−1} (e^{(AΛ_1 − ωI)(t − t_0)} − I) α,    (9)

where Λ_1 := diag[μ] and (x(t_0))_{u∈V} = α_u + Σ_{v∈N(u)} a_uv Σ_{t_i∈H_v(t_0)} e^{−ω(t_0 − t_i)} m_v(t_i).
Remarkably, we can efficiently compute both terms in Eq. 9 by using the iterative algorithm by Al-Mohy et al. [2] for the matrix exponentials and the well-known GMRES method [23] for the matrix
inversion. Given this predictive formula, we can easily study the stability condition and, for stable
systems, find the steady-state conditional average opinion (proven in Appendix D):
Theorem 3 Given the conditions of Theorem 2, if Re[ξ(AΛ_1)] < ω, then

    lim_{t→∞} E_{H_t\H_{t_0}}[x*(t) | H_{t_0}] = (I − AΛ_1/ω)^{−1} α.    (10)

The above results indicate that the conditional average opinions are nonlinearly related to the parameter matrix A, which depends on the network structure, and to the message rates μ, which in this case
are assumed to be constant and independent of the network structure. Figure 1 provides empirical
evidence of these results.
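As an illustration, Eqs. 9 and 10 translate directly into a few lines of code (a sketch using NumPy/SciPy; for simplicity we form the matrix exponential explicitly instead of using the iterative schemes of [2, 23], and we assume AΛ_1 − ωI is invertible):

```python
import numpy as np
from scipy.linalg import expm

def forecast_poisson(x_t0, alpha, mu, A, omega, dt):
    """Eq. 9: conditional average opinion dt time units after t0."""
    n = len(mu)
    G = A @ np.diag(mu) - omega * np.eye(n)   # A Lambda_1 - omega I
    E = expm(G * dt)
    return E @ x_t0 + omega * np.linalg.solve(G, (E - np.eye(n)) @ alpha)

def steady_state_poisson(alpha, mu, A, omega):
    """Eq. 10: valid when Re[xi(A Lambda_1)] < omega."""
    n = len(mu)
    return np.linalg.solve(np.eye(n) - A @ np.diag(mu) / omega, alpha)
```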
II. Multivariate Hawkes process. Suppose each user's messages follow a multivariate Hawkes
process, given by Eq. 3, with b_vu = 0 for all v, u ∈ G, v ≠ u. Then, the conditional average opinion
is given by (proven in Appendix E):
Theorem 4 Given a collection of messages H_{t_0} recorded during a time period [0, t_0) and λ*_u(t) =
μ_u + b_uu Σ_{e_i∈H_u(t)} e^{−ν(t − t_i)} for all u ∈ G, then the conditional average satisfies the following
differential equation:

    dE_{H_t\H_{t_0}}[x*(t) | H_{t_0}] / dt = [−ωI + AΓ(t)] E_{H_t\H_{t_0}}[x*(t) | H_{t_0}] + ωα,    (11)

where
    Γ(t) = diag(E_{H_t\H_{t_0}}[λ*(t) | H_{t_0}]),
    E_{H_t\H_{t_0}}[λ*(t) | H_{t_0}] = e^{(B − νI)(t − t_0)} λ(t_0) + ν(B − νI)^{−1} (e^{(B − νI)(t − t_0)} − I) μ, ∀t ≥ t_0,
    (λ(t_0))_{u∈V} = μ_u + Σ_{v∈N(u)} b_uv Σ_{t_i∈H_v(t_0)} e^{−ν(t_0 − t_i)},
    B = diag[b_11, . . . , b_{|V||V|}].

Here, we can compute the conditional average by numerically solving the differential equation
above, which is not stochastic, and we can efficiently compute the vector E_{H_t}[λ*(t)] by using
again the algorithm by Al-Mohy et al. [2] and the GMRES method [23].
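A sketch of this numerical solution (our own transcription of Theorem 4 using SciPy's ODE integrator; the matrix exponential is formed explicitly for simplicity):

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

def forecast_hawkes(x_t0, lam_t0, alpha, mu, A, B, omega, nu, dt):
    """Numerically integrate Eq. 11; B is diagonal (b_vu = 0 for v != u)."""
    n = len(mu)
    H = B - nu * np.eye(n)

    def mean_intensity(s):
        # E[lambda*(t0 + s) | H_t0] from Theorem 4
        E = expm(H * s)
        return E @ lam_t0 + nu * np.linalg.solve(H, (E - np.eye(n)) @ mu)

    def rhs(s, x):
        Gamma = np.diag(mean_intensity(s))
        return (-omega * np.eye(n) + A @ Gamma) @ x + omega * alpha

    sol = solve_ivp(rhs, (0.0, dt), x_t0, rtol=1e-6)
    return sol.y[:, -1]
```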
In this case, the stability condition and the steady-state conditional average opinion are given by
(proven in Appendix F):
Theorem 5 Given the conditions of Theorem 4, if the transition matrix Φ(t) associated with the time-varying linear system described by Eq. 11 satisfies ‖Φ(t)‖ ≤ γ e^{−ct} ∀t > 0, where γ, c > 0,
then

    lim_{t→∞} E_{H_t\H_{t_0}}[x*(t) | H_{t_0}] = (I − AΛ_2/ω)^{−1} α,    (12)

where Λ_2 := diag[(I − B/ν)^{−1} μ].
The above results indicate that the conditional average opinions are nonlinearly related to the parameter matrices A and B. This suggests that the effect of the temporal influence on the opinion
evolution, by means of the parameter matrix B of the multivariate Hawkes process, is non-trivial.
We illustrate this result empirically in Figure 1.
Figure 1: Opinion dynamics on two 50-node networks G1 (top) and G2 (bottom) for Poisson (P)
and Hawkes (H) message intensities. The first column visualizes the two networks and the opinion of
each node at t = 0 (positive/negative opinions in red/blue). The second column shows the temporal
evolution of the theoretical and empirical average opinion for Poisson intensities. The third column
shows the temporal evolution of the empirical average opinion for Hawkes intensities, where we
compute the average separately for positive (+) and negative (−) opinions in the steady state. The
fourth and fifth columns show the polarity of the average opinion per user over time.
3.2 Simulation-based forecasting
Given the efficient simulation procedure described in Section 2.2, we can readily derive a general
simulation-based formula for opinion forecasting:

    E_{H_t\H_{t_0}}[x*(t) | H_{t_0}] ≈ x̂*(t) = (1/n) Σ_{l=1}^{n} x*_l(t),    (13)

where n is the number of times that we simulate the opinion dynamics and x*_l(t) gathers the users'
opinions at time t for the l-th simulation. Moreover, we have the following theoretical guarantee
(proven in Appendix G):
Theorem 6 Simulate the opinion dynamics up to time t > t_0 the following number of times:

    n ≥ (1 / (3ε²)) (6σ²_max + 4 x_max ε) log(2/δ),    (14)

where σ²_max = max_{u∈G} σ²_{H_t\H_{t_0}}(x*_u(t) | H_{t_0}) is the maximum variance of the users' opinions, which
we analyze in Appendix G, and x_max ≥ |x_u(t)|, ∀u ∈ G is an upper bound on the users' (absolute)
opinions. Then, for each user u ∈ G, the error between her true and estimated average opinion
satisfies |x̂*_u(t) − E_{H_t\H_{t_0}}[x*_u(t) | H_{t_0}]| ≤ ε with probability at least 1 − δ.
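The bound in Eq. 14, as reconstructed above, translates directly into a sample-size calculator (a minimal sketch):

```python
import numpy as np

def n_simulations(sigma_max, x_max, eps, delta):
    """Smallest n from Eq. 14 guaranteeing error <= eps w.p. >= 1 - delta."""
    return int(np.ceil((6 * sigma_max**2 + 4 * x_max * eps)
                       / (3 * eps**2) * np.log(2 / delta)))

print(n_simulations(sigma_max=0.5, x_max=1.0, eps=0.1, delta=0.05))  # 234
```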
4 Experiments
4.1 Experiments on synthetic data
We first provide empirical evidence that our model is able to produce different types of opinion
dynamics, which may or may not converge to a steady state of consensus or polarization. Then, we
show that our model estimation and simulation algorithms, as well as our predictive formulas, scale
to networks with millions of users and events. Appendix J contains an evaluation of the accuracy of
our model parameter estimation method.
Different types of opinion dynamics. We first simulate our model on two different small networks
using Poisson intensities, i.e., λ*_u(t) = μ_u, with μ_u ∼ U(0, 1) ∀u, and then simulate our model on the
same networks using Hawkes intensities with b_vu ∼ U(0, 1) on 5% of the nodes, chosen at
random, and the original Poisson intensities on the remaining nodes. Figure 1 summarizes the results, which show that (i) our model is able to produce opinion dynamics that converge to consensus
(second column) and polarization (third column); (ii) the opinion forecasting formulas described in
Section 3 closely match a simulation-based estimation (second column); and (iii) the evolution of
the average opinion and whether opinions converge to a steady state of consensus or polarization
depend on the functional form of the message intensity5.
Scalability. Figure 2 shows that our model estimation and simulation algorithms, described in Sections 2.1 and 2.2, and our analytical predictive formulas, described in Section 3.1, scale to networks
with millions of users and events. For example, our algorithm takes 20 minutes to estimate the
model parameters from 10 million events generated by one million nodes using a single machine
with 24 cores and 64 GB RAM.
Figure 2: Panels (a) and (b) show the running time of our estimation and simulation procedures against
the number of nodes, where the average number of events per node is 10. Panels (c) and (d) show the
running time needed to compute our analytical formulas against the number of nodes and the time horizon
T = t − t_0, where the number of nodes is 10^3. In Panel (c), T = 6 hours. For all panels, the average
degree per node is 30. The experiments are carried out on a single machine with 24 cores and 64 GB
of main memory. Panels: (a) Estimation vs. number of nodes; (b) Simulation vs. number of nodes;
(c) Forecast vs. number of nodes; (d) Forecast vs. T.
4.2 Experiments on real data
We use real data gathered from Twitter to show that our model can forecast users' opinions more
accurately than six state-of-the-art methods [7, 8, 9, 15, 19, 26] (see Appendix L).
Experimental setup. We experimented with five Twitter datasets about current real-world events
(Politics, Movie, Fight, Bollywood and US), in which, for each recorded message i, we compute its
sentiment value m_i using a popular sentiment analysis toolbox specially designed for Twitter [13].
Here, the sentiment takes values m ∈ (−1, 1) and we consider the sentiment polarity to be simply
sign(m). Appendix K contains further details and statistics about these datasets.
Opinion forecasting. We first evaluate the performance of our model at predicting sentiment (expressed opinion) at the message level. To do so, for each dataset, we first estimate the parameters of
our model, SLANT, using messages from a training set containing the (chronologically) first 90%
of the messages. Here, we set the decay parameters of the exponential triggering kernels κ(t) and
g(t) by cross-validation. Then, we evaluate the predictive performance of our opinion forecasting
formulas using the last 10% of the messages6. More specifically, we predict the sentiment value m
of each message posted by user u in the test set given the history up to T hours before the time
of the message, i.e., m̂ = E_{H_t\H_{t−T}}[x*_u(t) | H_{t−T}]. We compare the performance of our model with
the asynchronous linear model (AsLM) [8], DeGroot's model [9], the voter model [26], the biased
voter model [7], the flocking model [15], and the sentiment prediction method based on collaborative filtering by Kim et al. [19], in terms of: (i) the mean squared error between the true (m) and the
estimated (m̂) sentiment value over all messages in the held-out set, i.e., E[(m − m̂)²]; and (ii) the
failure rate, defined as the probability that the true and the estimated polarity do not coincide, i.e.,
P(sign(m) ≠ sign(m̂)). For the baseline algorithms, which work in discrete time, we simulate N_T
rounds in (t − T, t), where N_T is the number of posts in time T. Figure 3 summarizes the results,
which show that: (i) our opinion forecasting formulas consistently outperform the others both in terms
of MSE (often by an order of magnitude) and failure rate;7 (ii) their forecasting performance degrades
gracefully with respect to T; in contrast, competing methods often fail catastrophically; and (iii) they
achieve additional mileage by using Hawkes processes instead of Poisson processes. To some
extent, we believe SLANT's superior performance is due to its ability to leverage historical data to
learn its model parameters and then simulate realistic temporal patterns.
5 For these particular networks, Poisson intensities lead to consensus while Hawkes intensities lead to polarization; however, we did find other examples in which Poisson intensities lead to polarization and Hawkes intensities lead to consensus.
6 Here, we do not distinguish between analytical and sampling-based forecasting since, in practice, they closely match each other.
7 The failure rate is very close to zero for those datasets in which most users post messages with the same polarity.
Figure 3: Sentiment prediction performance using a 10% held-out set for each real-world dataset.
Panels: (a) Politics, (b) Movie, (c) Fight, (d) Bollywood, (e) US; compared methods: SLANT (H),
SLANT (P), Linear, DeGroot, Voter, BiasedVoter, Flocking, Collab-Filter. Performance is measured
in terms of the mean squared error (MSE) on the sentiment value, E[(m − m̂)²], and the failure rate
on the sentiment polarity, P(sign(m) ≠ sign(m̂)). For each message in the held-out set, we predict
the sentiment value m given the history up to T hours before the time of the message, for different
values of T. Nowcasting corresponds to T = 0 and forecasting to T > 0. The sentiment value
m ∈ (−1, 1) and the sentiment polarity sign(m) ∈ {−1, 1}.
Figure 4: Macroscopic sentiment prediction given by our model for two real-world datasets. Panels:
(a) Tw: Movie (Hawkes), (b) Tw: Movie (Poisson), (c) Tw: US (Hawkes), (d) Tw: US (Poisson). The
panels show the observed sentiment m̄(t) (in blue, running average), the inferred opinion x̄(t) on the
training set (in red), and the forecasted opinion E_{H_t\H_{t−T}}[x̄(t) | H_{t−T}] for T = 1, 3, and 5 hours on
the test set (in black, green and gray, respectively), where the overbar denotes the average across users.
Finally, we look at the forecasting results at the network level and show that our forecasting formulas
can also predict the evolution of opinions macroscopically (in terms of the average opinion across
users). Figure 4 summarizes the results for two real-world datasets, which show that the forecasted
opinions become less accurate as the time T becomes larger, since the average is computed on longer
time periods. As expected, our model is more accurate when the message intensities are modeled
using multivariate Hawkes processes. We found qualitatively similar results for the remaining datasets.
5 Conclusions
We proposed a modeling framework of opinion dynamics whose key innovation is modeling users'
latent opinions as continuous-time stochastic processes driven by a set of marked jump stochastic
differential equations (SDEs) [14]. Such a construction allows each user's latent opinion to be modulated over time by the opinions asynchronously expressed by her neighbors as sentiment messages.
We then exploited a key property of our model, the Markov property, to design efficient parameter
estimation and simulation algorithms, which scale to networks with millions of nodes. Moreover, we
derived a set of novel predictive formulas for efficient and accurate opinion forecasting and identified
conditions under which opinions converge to a steady state of consensus or polarization. Finally, we
experimented with real data gathered from Twitter and showed that our framework achieves more
accurate opinion forecasting than the state of the art.
Our model opens up many interesting venues for future work. For example, in Eq. 4, our model
assumes a linear dependence between users' opinions; however, in some scenarios, this may be a
coarse approximation. A natural follow-up to improve the opinion forecasting accuracy would be
to consider nonlinear dependences between opinions. It would also be interesting to augment our model
to jointly consider correlations between different topics. One could further leverage our modeling
framework to design opinion shaping algorithms based on stochastic optimal control [14, 25]. Finally, one
of the key modeling ideas is realizing that users' expressed opinions (be it in the form of thumbs
up/down or text sentiment) can be viewed as noisy discrete samples of the users' latent opinion localized in time. It would be very interesting to generalize this idea to any type of event data and
derive sampling theorems and conditions under which an underlying general continuous signal of
interest (be it a user's opinion or expertise) can be recovered from event data with provable guarantees.
Acknowledgement: Abir De is partially supported by Google India under the Google India PhD Fellowship
Award, and Isabel Valera is supported by a Humboldt post-doctoral fellowship.
References
[1] O. Aalen, Ø. Borgan, and H. Gjessing. Survival and Event History Analysis: A Process Point of View. Springer Verlag, 2008.
[2] A. H. Al-Mohy and N. J. Higham. Computing the action of the matrix exponential, with an application to exponential integrators. SIAM Journal on Scientific Computing, 33(2):488–511, 2011.
[3] R. Axelrod. The dissemination of culture: A model with local convergence and global polarization. Journal of Conflict Resolution, 41(2):203–226, 1997.
[4] E. G. Birgin, J. M. Martínez, and M. Raydan. Nonmonotone spectral projected gradient methods on convex sets. SIAM Journal on Optimization, 10(4), 2000.
[5] C. Blundell, J. Beck, and K. A. Heller. Modelling reciprocating relationships with Hawkes processes. In Advances in Neural Information Processing Systems, pages 2600–2608, 2012.
[6] P. Clifford and A. Sudbury. A model for spatial conflict. Biometrika, 60(3):581–588, 1973.
[7] A. Das, S. Gollapudi, and K. Munagala. Modeling opinion dynamics in social networks. In WSDM, 2014.
[8] A. De, S. Bhattacharya, P. Bhattacharya, N. Ganguly, and S. Chakrabarti. Learning a linear influence model from transient opinion dynamics. In CIKM, 2014.
[9] M. H. DeGroot. Reaching a consensus. Journal of the American Statistical Association, 69(345), 1974.
[10] M. Farajtabar, N. Du, M. Gomez-Rodriguez, I. Valera, L. Song, and H. Zha. Shaping social activity by incentivizing users. In NIPS, 2014.
[11] M. Farajtabar, Y. Wang, M. Gomez-Rodriguez, S. Li, H. Zha, and L. Song. Coevolve: A joint point process model for information diffusion and network co-evolution. In NIPS, 2015.
[12] M. Gomez-Rodriguez, D. Balduzzi, and B. Schölkopf. Uncovering the temporal dynamics of diffusion networks. In ICML, 2011.
[13] A. Hannak, E. Anderson, L. F. Barrett, S. Lehmann, A. Mislove, and M. Riedewald. Tweetin' in the rain: Exploring societal-scale effects of weather on mood. In ICWSM, 2012.
[14] F. B. Hanson. Applied Stochastic Processes and Control for Jump-Diffusions. SIAM, 2007.
[15] R. Hegselmann and U. Krause. Opinion dynamics and bounded confidence models, analysis, and simulation. Journal of Artificial Societies and Social Simulation, 5(3), 2002.
[16] D. Hinrichsen, A. Ilchmann, and A. Pritchard. Robustness of stability of time-varying linear systems. Journal of Differential Equations, 82(2):219–250, 1989.
[17] P. Holme and M. E. Newman. Nonequilibrium phase transition in the coevolution of networks and opinions. Physical Review E, 74(5):056108, 2006.
[18] T. Karppi and K. Crawford. Social media, financial algorithms and the hack crash. TC&S, 2015.
[19] J. Kim, J.-B. Yoo, H. Lim, H. Qiu, Z. Kozareva, and A. Galstyan. Sentiment prediction using collaborative filtering. In ICWSM, 2013.
[20] J. Leskovec, D. Chakrabarti, J. M. Kleinberg, C. Faloutsos, and Z. Ghahramani. Kronecker graphs: An approach to modeling networks. JMLR, 2010.
[21] B. Pang and L. Lee. Opinion mining and sentiment analysis. Foundations and Trends in Information Retrieval, 2(1-2), 2008.
[22] B. H. Raven. The bases of power: Origins and recent developments. Journal of Social Issues, 49(4), 1993.
[23] Y. Saad and M. H. Schultz. GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems. SIAM Journal on Scientific and Statistical Computing, 7(3):856–869, 1986.
[24] I. Valera and M. Gomez-Rodriguez. Modeling adoption and usage of competing products. In Proceedings of the 2015 IEEE International Conference on Data Mining, 2015.
[25] Y. Wang, E. Theodorou, A. Verma, and L. Song. Steering opinion dynamics in information diffusion networks. arXiv preprint arXiv:1603.09021, 2016.
[26] M. E. Yildiz, R. Pagliari, A. Ozdaglar, and A. Scaglione. Voting models in random networks. In Information Theory and Applications Workshop, pages 1–7, 2010.
Generating Videos with Scene Dynamics
Carl Vondrick
MIT
[email protected]
Hamed Pirsiavash
UMBC
[email protected]
Antonio Torralba
MIT
[email protected]
Abstract
We capitalize on large amounts of unlabeled video in order to learn a model of
scene dynamics for both video recognition tasks (e.g. action classification) and
video generation tasks (e.g. future prediction). We propose a generative adversarial
network for video with a spatio-temporal convolutional architecture that untangles
the scene's foreground from the background. Experiments suggest this model can
generate tiny videos up to a second at full frame rate better than simple baselines,
and we show its utility at predicting plausible futures of static images. Moreover,
experiments and visualizations show the model internally learns useful features for
recognizing actions with minimal supervision, suggesting scene dynamics are a
promising signal for representation learning. We believe generative video models
can impact many applications in video understanding and simulation.
1 Introduction
Understanding object motions and scene dynamics is a core problem in computer vision. For both
video recognition tasks (e.g., action classification) and video generation tasks (e.g., future prediction),
a model of how scenes transform is needed. However, creating a model of dynamics is challenging
because there is a vast number of ways that objects and scenes can change.
In this work, we are interested in the fundamental problem of learning how scenes transform with
time. We believe investigating this question may yield insight into the design of predictive models for
computer vision. However, since annotating this knowledge is both expensive and ambiguous, we
instead seek to learn it directly from large amounts of in-the-wild, unlabeled video. Unlabeled video
has the advantage that it can be economically acquired at massive scales yet contains rich temporal
signals ?for free? because frames are temporally coherent.
With the goal of capturing some of the temporal knowledge contained in large amounts of unlabeled
video, we present an approach that learns to generate tiny videos which have fairly realistic dynamics
and motions. To do this, we capitalize on recent advances in generative adversarial networks [9, 31, 4],
which we extend to video. We introduce a two-stream generative model that explicitly models the
foreground separately from the background, which allows us to enforce that the background is
stationary, helping the network to learn which objects move and which do not.
Our experiments suggest that our model has started to learn about dynamics. In our generation
experiments, we show that our model can generate scenes with plausible motions.1 We conducted a
psychophysical study where we asked over a hundred people to compare generated videos, and people
preferred videos from our full model more often. Furthermore, by making the model conditional
on an input image, our model can sometimes predict a plausible (but ?incorrect?) future. In our
recognition experiments, we show how our model has learned, without supervision, useful features
for human action classification. Moreover, visualizations of the learned representation suggest future
generation may be a promising supervisory signal for learning to recognize objects of motion.
1 See http://mit.edu/vondrick/tinyvideo for the animated videos.
The primary contribution of this paper is showing how to leverage large amounts of unlabeled video
in order to acquire priors about scene dynamics. The secondary contribution is the development of a
generative model for video. The remainder of this paper describes these contributions in detail. In
section 2, we describe our generative model for video. In section 3, we present several experiments to
analyze the generative model. We believe that generative video models can impact many applications,
such as in simulations, forecasting, and representation learning.
1.1 Related Work
This paper builds upon early work in generative video models [29]. However, previous work
has focused mostly on small patches, and evaluated it for video clustering. Here, we develop a
generative video model for natural scenes using state-of-the-art adversarial learning methods [9, 31].
Conceptually, our work is related to studies into fundamental roles of time in computer vision
[30, 12, 2, 7, 24]. However, here we are interested in generating short videos with realistic temporal
semantics, rather than detecting or retrieving them.
Our technical approach builds on recent work in generative adversarial networks for image modeling
[9, 31, 4, 47, 28], which we extend to video. To our knowledge, there has been relatively little
work extensively studying generative adversarial networks for video. Most notably, [22] also uses
adversarial networks for video frame prediction. Our framework can generate videos for longer time
scales and learn representations of video using unlabeled data. Our work is also related to efforts
to predict the future in video [33, 22, 43, 50, 42, 17, 8, 54] as well as concurrent work in future
generation [6, 15, 20, 49, 55]. Often these works may be viewed as a generative model conditioned
on the past frames. Our work complements these efforts in two ways. Firstly, we explore how to
generate videos from scratch (not conditioned on the past). Secondly, while prior work has used
generative models in video settings mostly on a single frame, we jointly generate a sequence of
frames (32 frames) using spatio-temporal convolutional networks, which may help prevent drifts due
to errors accumulating.
We leverage approaches for recognizing actions in video with deep networks, but apply them for
video generation instead. We use spatio-temporal 3D convolutions to model videos [40], but we use
fractionally strided convolutions [51] instead because we are interested in generation. We also use
two-streams to model video [34], but apply them for video generation instead of action recognition.
However, our approach does not explicitly use optical flow; instead, we expect the network to learn
motion features on its own. Finally, this paper is related to a growing body of work that capitalizes
on large amounts of unlabeled video for visual recognition tasks [18, 46, 37, 13, 24, 25, 3, 32, 26, 27,
19, 41, 42, 1]. We instead leverage large amounts of unlabeled video for generation.
2 Generative Models for Video
In this section, we present a generative model for videos. We propose to use generative adversarial
networks [9], which have been shown to have good performance on image generation [31, 4].
2.1 Review: Generative Adversarial Networks
The main idea behind generative adversarial networks [9] is to train two networks: a generator
network G tries to produce a video, and a discriminator network D tries to distinguish between
"real" videos and "fake" generated videos. One can train these networks against each other in a
min-max game where the generator seeks to maximally fool the discriminator while simultaneously
the discriminator seeks to detect which examples are fake:
min_{w_G} max_{w_D}  E_{x∼p_x(x)}[log D(x; w_D)] + E_{z∼p_z(z)}[log(1 − D(G(z; w_G); w_D))]        (1)
where z is a latent "code" that is often sampled from a simple distribution (such as a normal
distribution) and x ∼ p_x(x) samples from the data distribution. In practice, since we do not know the
true distribution of data p_x(x), we can estimate the expectation by drawing from our dataset.
Since we will optimize Equation 1 with gradient based methods (SGD), the two networks G and
D can take on any form appropriate for the task as long as they are differentiable with respect to
parameters wG and wD . We design a G and D for video.
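As an illustration of how this min-max game is optimized in practice, here is a minimal PyTorch-style sketch of one alternating update of Equation 1. This is a sketch under assumptions, not the authors' Torch7 code: G and D (with D producing a sigmoid probability) and their optimizers are assumed defined elsewhere, and the latent code may need reshaping to whatever G expects.

```python
import torch
import torch.nn.functional as F

def gan_step(G, D, opt_G, opt_D, real_videos, z_dim=100):
    batch = real_videos.size(0)
    z = torch.randn(batch, z_dim)  # z ~ N(0, I); reshape as G expects

    # Discriminator step: ascend log D(x) + log(1 - D(G(z))).
    d_real = D(real_videos)
    d_fake = D(G(z).detach())  # detach so only D's parameters get gradients
    d_loss = F.binary_cross_entropy(d_real, torch.ones_like(d_real)) \
           + F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # Generator step: descend log(1 - D(G(z))); shown here in the common
    # non-saturating form that instead ascends log D(G(z)).
    d_gen = D(G(z))
    g_loss = F.binary_cross_entropy(d_gen, torch.ones_like(d_gen))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
```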
[Figure 1 diagram: a 100-dim noise code drives a foreground stream of 3D convolutions (producing foreground f with tanh and mask m with sigmoid) and a background stream of 2D convolutions (producing background b with tanh, replicated over time); the streams combine as m ⊙ f + (1 − m) ⊙ b into the generated space-time cuboid.]
Figure 1: Video Generator Network: We illustrate our network architecture for the generator. The
input is 100 dimensional (Gaussian noise). There are two independent streams: a moving foreground
pathway of fractionally-strided spatio-temporal convolutions, and a static background pathway of
fractionally-strided spatial convolutions, both of which up-sample. These two pathways are combined
to create the generated video using a mask from the motion pathway. Below each volume is its size
and the number of channels in parenthesis.
2.2 Generator Network
The input to the generator network is a low-dimensional latent code z ∈ R^d. In most cases, this code
can be sampled from a distribution (e.g., Gaussian). Given a code z, we wish to produce a video.
We design the architecture of the generator network with a few principles in mind. Firstly, we want the
network to be invariant to translations in both space and time. Secondly, we want a low-dimensional
z to be able to produce a high-dimensional output (video). Thirdly, we want to assume a stationary
camera and take advantage of the property that usually only objects move. We are interested
in modeling object motion, and not the motion of cameras. Moreover, since modeling that the
background is stationary is important in video recognition tasks [44], it may be helpful in video
generation as well. We explore two different network architectures:
One Stream Architecture: We combine spatio-temporal convolutions [14, 40] with fractionally
strided convolutions [51, 31] to generate video. Three dimensional convolutions provide spatial
and temporal invariance, while fractionally strided convolutions can upsample efficiently in a deep
network, allowing z to be low-dimensional. We use an architecture inspired by [31], except extended
in time. We use a five layer network of 4 × 4 × 4 convolutions with a stride of 2, except for the first
layer which uses 2 × 4 × 4 convolutions (time × width × height). We found that these kernel sizes
provided an appropriate balance between training speed and quality of generations.
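A minimal PyTorch sketch of such a stack follows. The kernel pattern matches the text; the channel widths and the stride/padding choices are assumptions made so that a 100-dim code reshaped to a 1 × 1 × 1 volume upsamples to a 32-frame 64 × 64 video.

```python
import torch.nn as nn

# Input: latent code of shape (batch, 100, 1, 1, 1).
one_stream = nn.Sequential(
    nn.ConvTranspose3d(100, 512, kernel_size=(2, 4, 4)),   # -> 2 x 4 x 4
    nn.BatchNorm3d(512), nn.ReLU(),
    nn.ConvTranspose3d(512, 256, 4, stride=2, padding=1),  # -> 4 x 8 x 8
    nn.BatchNorm3d(256), nn.ReLU(),
    nn.ConvTranspose3d(256, 128, 4, stride=2, padding=1),  # -> 8 x 16 x 16
    nn.BatchNorm3d(128), nn.ReLU(),
    nn.ConvTranspose3d(128, 64, 4, stride=2, padding=1),   # -> 16 x 32 x 32
    nn.BatchNorm3d(64), nn.ReLU(),
    nn.ConvTranspose3d(64, 3, 4, stride=2, padding=1),     # -> 32 x 64 x 64
    nn.Tanh(),  # outputs in [-1, 1], matching the video normalization
)
```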
Two Stream Architecture: The one stream architecture does not model that the world is stationary
and usually only objects move. We experimented with making this behavior explicit in the model. We
use an architecture that enforces a static background and moving foreground. We use a two-stream
architecture where the generator is governed by the combination:
G_2(z) = m(z) ⊙ f(z) + (1 − m(z)) ⊙ b(z).        (2)
Our intention is that 0 ≤ m(z) ≤ 1 can be viewed as a spatio-temporal mask that selects either
the foreground f(z) model or the background model b(z) for each pixel location and timestep. To
enforce a background model in the generations, b(z) produces a spatial image that is replicated over
time, while f(z) produces a spatio-temporal cuboid masked by m(z). By summing the foreground
model with the background model, we can obtain the final generation. Note that ⊙ is element-wise
multiplication, and we replicate singleton dimensions to match its corresponding tensor. During
learning, we also add to the objective a small sparsity prior on the mask λ‖m(z)‖₁ for λ = 0.1,
which we found helps encourage the network to use the background stream.
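In code, Equation 2 is a single masked blend plus the L1 mask penalty; a sketch, assuming the three stream outputs f, b, and m have the shapes noted below:

```python
import torch

def combine(f, b, m, sparsity_weight=0.1):
    """Blend foreground and background with a spatio-temporal mask.

    f: foreground video,  shape (batch, 3, T, H, W)
    b: background image,  shape (batch, 3, 1, H, W), replicated over time
    m: mask in [0, 1],    shape (batch, 1, T, H, W)
    """
    video = m * f + (1 - m) * b.expand_as(f)    # element-wise, broadcast over channels
    penalty = sparsity_weight * m.abs().mean()  # L1 prior on the mask (up to normalization)
    return video, penalty
```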
We use fractionally strided convolutional networks for m(z), f (z), and b(z). For f (z), we use the
same network as the one-stream architecture, and for b(z) we use a similar generator architecture to
[31]. We only use their architecture; we do not initialize with their learned weights. To create the
mask m(z), we use a network that shares weights with f (z) except the last layer, which has only
one output channel. We use a sigmoid activation function for the mask. We visualize the two-stream
architecture in Figure 1. In our experiments, the generator produces 64 × 64 videos for 32 frames,
which is a little over a second.
2.3 Discriminator Network
The discriminator needs to be able to solve two problems: firstly, it must be able to classify realistic
scenes from synthetically generated scenes, and secondly, it must be able to recognize realistic motion
between frames. We chose to design the discriminator to be able to solve both of these tasks with the
same model. We use a five-layer spatio-temporal convolutional network with kernels 4 × 4 × 4 so
that the hidden layers can learn both visual models and motion models. We design the architecture to
be reverse of the foreground stream in the generator, replacing fractionally strided convolutions with
strided convolutions (to down-sample instead of up-sample), and replacing the last layer to output a
binary classification (real or not).
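A sketch of such a discriminator in PyTorch. Kernel sizes and strides follow the text, and batch norm is omitted from the first layer per §2.4; the channel widths and the 0.2 leaky slope are assumptions.

```python
import torch.nn as nn

def block(c_in, c_out, bn=True):
    layers = [nn.Conv3d(c_in, c_out, kernel_size=4, stride=2, padding=1)]
    if bn:
        layers.append(nn.BatchNorm3d(c_out))
    layers.append(nn.LeakyReLU(0.2))
    return layers

# Down-samples a (3, 32, 64, 64) video to a single real/fake probability.
discriminator = nn.Sequential(
    *block(3, 64, bn=False),   # no batch norm on the first layer
    *block(64, 128),
    *block(128, 256),
    *block(256, 512),
    nn.Conv3d(512, 1, kernel_size=(2, 4, 4)),  # -> (batch, 1, 1, 1, 1)
    nn.Sigmoid(),
)
```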
2.4 Learning and Implementation
We train the generator and discriminator with stochastic gradient descent. We alternate between
maximizing the loss w.r.t. wD and minimizing the loss w.r.t. wG until a fixed number of iterations.
All networks are trained from scratch. Our implementation is based off a modified version of [31] in
Torch7. We used a more numerically stable implementation of cross entropy loss to prevent overflow.
We use the Adam [16] optimizer and a fixed learning rate of 0.0002 and momentum term of 0.5. The
latent code has 100 dimensions, which we sample from a normal distribution. We use a batch size
of 64. We initialize all weights with zero mean Gaussian noise with standard deviation 0.01. We
normalize all videos to be in the range [?1, 1]. We use batch normalization [11] followed by the
ReLU activation functions after every layer in the generator, except the output layers, which uses
tanh. Following [31], we also use batch normalization in the discriminator except for the first layer
and we instead use leaky ReLU [48]. Training typically took several days on a GPU.
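These details translate directly into setup code; a sketch, with generator and discriminator assumed defined as above (the Adam beta_2 of 0.999 is the library default, an assumption):

```python
import torch

def weights_init(module):
    # Zero-mean Gaussian initialization with standard deviation 0.01.
    if isinstance(module, (torch.nn.Conv3d, torch.nn.ConvTranspose3d)):
        torch.nn.init.normal_(module.weight, mean=0.0, std=0.01)

generator.apply(weights_init)
discriminator.apply(weights_init)

# Fixed learning rate of 0.0002 with momentum term (beta_1) of 0.5.
opt_G = torch.optim.Adam(generator.parameters(), lr=0.0002, betas=(0.5, 0.999))
opt_D = torch.optim.Adam(discriminator.parameters(), lr=0.0002, betas=(0.5, 0.999))

batch_size = 64
z_dim = 100  # latent code dimensionality, sampled from a normal distribution
```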
3 Experiments
We experiment with the generative adversarial network for video (VGAN) on both generation and
recognition tasks. We also show several qualitative examples online.
3.1 Unlabeled Video Dataset
We use a large amount of unlabeled video to train our model. We downloaded over two million videos
from Flickr [39] by querying for popular Flickr tags as well as querying for common English words.
From this pool, we created two datasets:
Unfiltered Unlabeled Videos: We use these videos directly, without any filtering, for representation
learning. The dataset is over 5,000 hours. Filtered Unlabeled Videos: To evaluate generations, we
use the Places2 pre-trained model [53] to automatically filter the videos by scene category. Since
image/video generation is a challenging problem, we assembled this dataset to better diagnose
strengths and weaknesses of approaches. We experimented with four scene categories: golf course,
hospital rooms (babies), beaches, and train station.
Stabilization: As we are interested in the movement of objects and not camera shake, we stabilize
the camera motion for both datasets. We extract SIFT keypoints [21], use RANSAC to estimate a
homography (rotation, translation, scale) between adjacent frames, and warp frames to minimize
background motion. When the homography moved out of the frame, we fill in the missing values
using the previous frames. If the homography has too large of a re-projection error, we ignore that
segment of the video for training, which only happened 3% of the time. The only other pre-processing
we do is normalizing the videos to be in the range [−1, 1]. We extract frames at native frame rate (25
fps). We use 32-frame videos of spatial resolution 64 × 64.
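A minimal OpenCV sketch of this stabilization step between two adjacent frames; the ratio-test and RANSAC reprojection thresholds are illustrative assumptions, not values from the paper.

```python
import cv2
import numpy as np

def stabilize(prev_gray, cur_gray, cur_frame):
    """Warp cur_frame into prev_gray's coordinates via a RANSAC homography."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(prev_gray, None)
    kp2, des2 = sift.detectAndCompute(cur_gray, None)
    matches = cv2.BFMatcher().knnMatch(des2, des1, k=2)
    # Lowe's ratio test keeps only distinctive matches.
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    src = np.float32([kp2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = prev_gray.shape[:2]
    return cv2.warpPerspective(cur_frame, H, (w, h))
```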
[Figure 2 panels: generated videos at frames 1, 16, and 32 for beach, golf course, train station, and hospital/baby scenes.]
Figure 2: Video Generations: We show some generations from the two-stream model. The red arrows highlight motions. Please see http://mit.edu/vondrick/tinyvideo for animated movies.
3.2 Video Generation
We evaluate both the one-stream and two-stream generator. We trained a generator for each scene
category in our filtered dataset. We perform both a qualitative evaluation as well as a quantitative
psychophysical evaluation to measure the perceptual quality of the generated videos.
Qualitative Results: We show several examples of the videos generated from our model in Figure
2. We observe that a) the generated scenes tend to be fairly sharp and that b) the motion patterns
are generally correct for their respective scene. For example, the beach model tends to produce
beaches with crashing waves, the golf model produces people walking on grass, and the train station
generations usually show train tracks and a train with windows rapidly moving along it. While the
model usually learns to put motion on the right objects, one common failure mode is that the objects
lack resolution. For example, the people in the beaches and golf courses are often blobs. Nevertheless,
we believe it is promising that our model can generate short motions. We visualize the behavior of
the two-stream architecture in Figure 3.
Baseline: Since to our knowledge there are no existing large-scale generative models of video ([33]
requires an input frame), we develop a simple but reasonable baseline for this task. We train an
autoencoder over our data. The encoder is similar to the discriminator network (except producing
100 dimensional code), while the decoder follows the two-stream generator network. Hence, the
baseline autoencoder network has a similar number of parameters as our full approach. We then feed
examples through the encoder and fit a Gaussian Mixture Model (GMM) with 256 components over
the 100 dimensional hidden space. To generate a novel video, we sample from this GMM, and feed
the sample through the decoder.
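A sketch of the baseline's sampling step with scikit-learn; codes (an N × 100 array of encoder outputs) and decoder are assumed defined elsewhere.

```python
import torch
from sklearn.mixture import GaussianMixture

gmm = GaussianMixture(n_components=256)
gmm.fit(codes)  # codes: (N, 100) array of encoder outputs

# To generate a novel video, sample a latent code and decode it.
sample, _ = gmm.sample(n_samples=1)
video = decoder(torch.from_numpy(sample).float())
```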
Evaluation Metric: We quantitatively evaluate our generation using a psychophysical two-alternative
forced choice with workers on Amazon Mechanical Turk. We show a worker two random videos,
[Figure 3 panels: background + foreground = generation, for beach (left) and golf (right) examples.]
Figure 3: Streams: We visualize the background, foreground, and masks for beaches (left) and golf
(right). The network generally learns to disentangle the foreground from the background.
"Which video is more realistic?" (percentage of trials)

                                              Golf  Beach  Train  Baby  Mean
Random Preference                               50     50     50    50    50
Prefer VGAN Two Stream over Autoencoder         88     83     87    71    82
Prefer VGAN One Stream over Autoencoder         85     88     85    73    82
Prefer VGAN Two Stream over VGAN One Stream     55     58     47    52    53
Prefer VGAN Two Stream over Real                21     23     23     6    18
Prefer VGAN One Stream over Real                17     21     19     8    16
Prefer Autoencoder over Real                     4      2      4     2     3
Table 1: Video Generation Preferences: We show two videos to workers on Amazon Mechanical
Turk, and ask them to choose which video is more realistic. The table shows the percentage of times
that workers prefer one generations from one model over another. In all cases, workers tend to prefer
video generative adversarial networks over an autoencoder. In most cases, workers show a slight
preference for the two-stream model.
and ask them "Which video is more realistic?" We collected over 13,000 opinions across 150 unique
workers. We paid workers one cent per comparison, and required workers to historically have a 95%
approval rating on MTurk. We experimented with removing bad workers that frequently said real
videos were not realistic, but the relative rankings did not change. We designed this experiment
following advice from [38], which advocates evaluating generative models for the task at hand. In
our case, we are interested in perceptual quality of motion. We consider a model X better than model
Y if workers prefer generations from X more than generations from Y.
Quantitative Results: Table 1 shows the percentage of times that workers preferred generations
from one model over another. Workers consistently prefer videos from the generative adversarial
network more than an autoencoder. Additionally, workers show a slight preference for the two-stream
architecture, especially in scenes where the background is large (e.g., golf course, beach). Although
the one-stream architecture is capable of generating stationary backgrounds, it may be difficult to
find this solution, motivating a more explicit architecture. The one-stream architecture generally
produces high-frequency temporal flickering in the background. To evaluate whether static frames are
better than our generations, we also ask workers to choose between our videos and a static frame, and
workers only chose the static frame 38% of the time, suggesting our model produces more realistic
motion than static frames on average. Finally, while workers generally can distinguish real videos
from generated videos, the workers show the most confusion with our two-stream model compared to
baselines, suggesting the two-stream generations may be more realistic on average.
3.3 Video Representation Learning
We also experimented with using our model as a way to learn unsupervised representations for video.
We train our two-stream model with over 5,000 hours of unfiltered, unlabeled videos from Flickr.
We then fine-tune the discriminator on the task of interest (e.g., action recognition) using a relatively
small set of labeled video. To do this, we replace the last layer (which is a binary classifier) with a
K-way softmax classifier. We also add dropout [36] to the penultimate layer to reduce overfitting.
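A sketch of this modification in PyTorch; the dropout probability and the flattened feature dimension are illustrative assumptions.

```python
import torch.nn as nn

def make_classifier(trunk, feature_dim, num_classes=101):
    """Swap the binary real/fake head for a K-way classifier (UCF101: K = 101)."""
    return nn.Sequential(
        trunk,                      # pretrained spatio-temporal conv layers
        nn.Flatten(),
        nn.Dropout(p=0.5),          # dropout on the penultimate layer
        nn.Linear(feature_dim, num_classes),  # softmax is applied inside the loss
    )
```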
Action Classification: We evaluated performance on classifying actions on UCF101 [35]. We
report accuracy in Figure 4a. Initializing the network with the weights learned from the generative
adversarial network outperforms a randomly initialized network, suggesting that it has learned
an useful internal representation for video. Interestingly, while a randomly initialized network
under-performs hand-crafted STIP features [35], the network initialized with our model significantly
Method                       Accuracy
Chance                        0.9%
STIP Features [35]           43.9%
Temporal Coherence [10]      45.4%
Shuffle and Learn [24]       50.2%
VGAN + Random Init           36.7%
VGAN + Logistic Reg          49.3%
VGAN + Fine Tune             52.1%
ImageNet Supervision [45]    91.4%
(a) Accuracy with Unsupervised Methods
[Figure 4 plots: (b) accuracy (percentage) vs. # of labeled training videos for VGAN Init, Random Init, and Chance; (c) relative accuracy gain vs. # of labeled training videos.]
Figure 4: Video Representation Learning: We evaluate the representation learned by the discriminator for action classification on UCF101 [35]. (a) By fine-tuning the discriminator on a relatively
small labeled dataset, we can obtain better performance than random initialization, and better than
hand-crafted space-time interest point (STIP) features. Moreover, our model slightly outperforms
another unsupervised video representation [24] despite using an order of magnitude fewer learned
parameters and only 64 × 64 videos. Note unsupervised video representations are still far from
models that leverage external supervision. (b) Our unsupervised representation with less labeled data
outperforms random initialization with all the labeled data. Our results suggest that, with just 1/8th
of the labeled data, we can match performance to a randomly initialized network that used all of the
labeled data. (c) The fine-tuned model has larger relative gain over random initialization in cases
with less labeled data. Note that (a) is over all train/test splits of UCF101, while (b,c) is over the first
split in order to make experiments less expensive.
outperforms it. We also experimented with training a logistic regression on only the last layer,
which performed worse. Finally, our model slightly outperforms another recent unsupervised video
representation learning approach [24]. However, our approach uses an order of magnitude fewer
parameters, less layers (5 layers vs 8 layers), and low-resolution video.
Performance vs Data: We also experimented with varying the amount of labeled training data
available to our fine-tuned network. Figure 4b reports performance versus the amount of labeled
training data available. As expected, performance increases with more labeled data. The fine-tuned
model shows an advantage in low data regimes: even with one eighth of the labeled data, the finetuned model still beats a randomly initialized network. Moreover, Figure 4c plots the relative accuracy
gain over the fine-tuned model and the random initialization (fine-tuned performance divided by
random initialized performance). This shows that fine-tuning with our model has larger relative gain
over random initialization in cases with less labeled data, showing its utility in low-data regimes.
3.4 Future Generation
We investigate whether our approach can be used to generate the future of a static image. Specifically,
given a static image x0 , can we extrapolate a video of possible consequent frames?
Encoder: We utilize the same model as our two-stream model, however we must make one change
in order to input the static image instead of the latent code. We can do this by attaching a five-layer convolutional network to the front of the generator which encodes the image into the latent
space, similar to a conditional generative adversarial network [23]. The rest of the generator and
discriminator networks remain the same. However, we add an additional loss term that minimizes
the L1 distance between the input and the first frame of the generated image. We do this so that the
generator creates videos consistent with the input image. We train from scratch with the objective:
min_{w_G} max_{w_D}  E_{x∼p_x(x)}[log D(x; w_D)] + E_{x_0∼p_{x_0}(x_0)}[log(1 − D(G(x_0; w_G); w_D))]
                     + E_{x_0∼p_{x_0}(x_0)} λ‖x_0 − G_0(x_0; w_G)‖²_2        (3)
where x_0 is the first frame of the input, G_0(·) is the first frame of the generated video, and λ ∈ R is a
hyperparameter. The discriminator will try to classify realistic frames and realistic motions as before,
while the generator will try to produce a realistic video such that the first frame is reconstructed well.
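A sketch of the resulting generator-side loss in PyTorch, assuming G(x0) returns a video of shape (batch, channels, time, height, width); the weight lam stands in for the unspecified λ.

```python
import torch
import torch.nn.functional as F

def conditional_generator_loss(G, D, x0, lam=1.0):
    """Adversarial term plus reconstruction of the first generated frame."""
    video = G(x0)
    d_out = D(video)
    adv = F.binary_cross_entropy(d_out, torch.ones_like(d_out))
    recon = lam * F.mse_loss(video[:, :, 0], x0)  # squared L2 on frame 0
    return adv + recon
```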
Results: We qualitatively show a few examples of our approach in Figure 5 using held-out testing
videos. Although the extrapolations are rarely correct, they often have fairly plausible motions. The
[Figure 5 panels: static input images and their generated videos at frames 1, 16, and 32.]
Figure 5: Future Generation: We show one application of generative video models where we
predict videos given a single static image. The red arrows highlight regions of motion. Since this is
an ambiguous task, our model usually does not generate the correct video, however the generation is
often plausible. Please see http://mit.edu/vondrick/tinyvideo for animated movies.
(a) hidden unit that fires on "person"
(b) hidden unit that fires on "train tracks"
Figure 6: Visualizing Representation: We visualize some hidden units in the encoder of the future
generator, following the technique from [52]. We highlight regions of images that a particular
convolutional hidden unit maximally activates on. While not all units are semantic, some units
activate on objects that are sources for motion, such as people and train tracks.
most common failure is that the generated video has a scene similar but not identical to the input
image, such as by changing colors or dropping/hallucinating objects. The former could be solved
by a color histogram normalization in post-processing (which we did not do for simplicity), while
we suspect the latter will require building more powerful generative models. The generated videos
are usually not the correct video, but we observe that often the motions are plausible. We are not
aware of an existing approach that can directly generate multi-frame videos from a single static image.
[33, 22] can generate video, but they require multiple input frames and empirically become blurry
after extrapolating many frames. [43, 50] can predict optic flow from a single image, but they do
not generate several frames of motion and may be susceptible to warping artifacts. We believe this
experiment shows an important application of generative video models.
Visualizing Representation: Since generating the future requires understanding how objects move,
the network may need learn to recognize some objects internally, even though it is not supervised to
do so. Figure 6 visualizes some activations of hidden units in the third convolutional layer. While
not all units are semantic, some of the units tend to be selective for objects that are sources of motion,
such as people or train tracks. These visualizations suggest that scaling up future generation might be
a promising supervisory signal for object recognition and complementary to [27, 5, 46].
Conclusion: Understanding scene dynamics will be crucial for the next generation of computer vision
systems. In this work, we explored how to learn some dynamics from large amounts of unlabeled
video by capitalizing on adversarial learning methods. Since annotating dynamics is expensive, we
believe learning from unlabeled data is a promising direction. While we are still a long way from
fully harnessing the potential of unlabeled video, our experiments support that abundant unlabeled
video can be lucrative for both learning to generate videos and learning visual representations.
Acknowledgements: We thank Yusuf Aytar for dataset discussions. We thank MIT TIG, especially Garrett
Wollman, for troubleshooting issues on storing the 26 TB of video. We are grateful for the Torch7 community
for answering many questions. NVidia donated GPUs used for this research. This work was supported by NSF
grant #1524817 to AT, START program at UMBC to HP, and the Google PhD fellowship to CV.
References
[1] Yusuf Aytar, Carl Vondrick, and Antonio Torralba. Learning sound representations from unlabeled video. NIPS, 2016.
[2] Tali Basha, Yael Moses, and Shai Avidan. Photo sequencing. In ECCV. 2012.
[3] Chao-Yeh Chen and Kristen Grauman. Watching unlabeled video helps learn new human actions from very few labeled snapshots. In
CVPR, 2013.
[4] Emily L Denton, Soumith Chintala, Rob Fergus, et al. Deep generative image models using a laplacian pyramid of adversarial networks.
In NIPS, 2015.
[5] Carl Doersch, Abhinav Gupta, and Alexei A Efros. Unsupervised visual representation learning by context prediction. In ICCV, 2015.
[6] Chelsea Finn, Ian Goodfellow, and Sergey Levine. Unsupervised learning for physical interaction through video prediction. arXiv, 2016.
[7] József Fiser and Richard N Aslin. Statistical learning of higher-order temporal structure from visual shape sequences. JEP, 2002.
[8] Katerina Fragkiadaki, Sergey Levine, Panna Felsen, and Jitendra Malik. Recurrent network models for human dynamics. In ICCV, 2015.
[9] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio.
Generative adversarial nets. In NIPS, 2014.
[10] Raia Hadsell, Sumit Chopra, and Yann LeCun. Dimensionality reduction by learning an invariant mapping. In CVPR, 2006.
[11] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv,
2015.
[12] Phillip Isola, Joseph J Lim, and Edward H Adelson. Discovering states and transformations in image collections. In CVPR, 2015.
[13] Dinesh Jayaraman and Kristen Grauman. Learning image representations tied to ego-motion. In ICCV, 2015.
[14] Shuiwang Ji, Wei Xu, Ming Yang, and Kai Yu. 3d convolutional neural networks for human action recognition. PAMI, 2013.
[15] Nal Kalchbrenner, Aaron van den Oord, Karen Simonyan, Ivo Danihelka, Oriol Vinyals, Alex Graves, and Koray Kavukcuoglu. Video
pixel networks. arXiv, 2016.
[16] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv, 2014.
[17] Kris Kitani, Brian Ziebart, James Bagnell, and Martial Hebert. Activity forecasting. ECCV, 2012.
[18] Quoc V Le. Building high-level features using large scale unsupervised learning. In CASSP, 2013.
[19] Yin Li, Manohar Paluri, James M Rehg, and Piotr Dollár. Unsupervised learning of edges. arXiv, 2015.
[20] William Lotter, Gabriel Kreiman, and David Cox. Deep predictive coding networks for video prediction and unsupervised learning.
arXiv, 2016.
[21] David G Lowe. Object recognition from local scale-invariant features. In ICCV, 1999.
[22] Michael Mathieu, Camille Couprie, and Yann LeCun. Deep multi-scale video prediction beyond mean square error. arXiv, 2015.
[23] Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv, 2014.
[24] Ishan Misra, C. Lawrence Zitnick, and Martial Hebert. Shuffle and Learn: Unsupervised Learning using Temporal Order Verification. In
ECCV, 2016.
[25] Hossein Mobahi, Ronan Collobert, and Jason Weston. Deep learning from temporal coherence in video. In ICML, 2009.
[26] Phuc Xuan Nguyen, Gregory Rogez, Charless Fowlkes, and Deva Ramanan. The open world of micro-videos. arXiv, 2016.
[27] Andrew Owens, Jiajun Wu, Josh H McDermott, William T Freeman, and Antonio Torralba. Ambient sound provides supervision for
visual learning. arXiv, 2016.
[28] Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, and Alexei A Efros. Context encoders: Feature learning by inpainting.
arXiv, 2016.
[29] Nikola Petrovic, Aleksandar Ivanovic, and Nebojsa Jojic. Recursive estimation of generative models of video. In CVPR, 2006.
[30] Lyndsey Pickup, Zheng Pan, Donglai Wei, YiChang Shih, Changshui Zhang, Andrew Zisserman, Bernhard Scholkopf, and William
Freeman. Seeing the arrow of time. In CVPR, 2014.
[31] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial
networks. arXiv, 2015.
[32] Vignesh Ramanathan, Kevin Tang, Greg Mori, and Li Fei-Fei. Learning temporal embeddings for complex video analysis. In CVPR,
2015.
[33] MarcAurelio Ranzato, Arthur Szlam, Joan Bruna, Michael Mathieu, Ronan Collobert, and Sumit Chopra. Video (language) modeling: a
baseline for generative models of natural videos. arXiv, 2014.
[34] Karen Simonyan and Andrew Zisserman. Two-stream convolutional networks for action recognition in videos. In NIPS, 2014.
[35] Khurram Soomro, Amir Roshan Zamir, and Mubarak Shah. Ucf101: A dataset of 101 human actions classes from videos in the wild.
arXiv, 2012.
[36] Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent
neural networks from overfitting. JMLR, 2014.
[37] Nitish Srivastava, Elman Mansimov, and Ruslan Salakhutdinov. Unsupervised learning of video representations using lstms. arXiv,
2015.
[38] Lucas Theis, Aäron van den Oord, and Matthias Bethge. A note on the evaluation of generative models. arXiv, 2015.
[39] Bart Thomee, David A Shamma, Gerald Friedland, Benjamin Elizalde, Karl Ni, Douglas Poland, Damian Borth, and Li-Jia Li. Yfcc100m:
The new data in multimedia research. ACM, 2016.
[40] Du Tran, Lubomir Bourdev, Rob Fergus, Lorenzo Torresani, and Manohar Paluri. Learning spatiotemporal features with 3d convolutional
networks. arXiv, 2014.
[41] Carl Vondrick, Donald Patterson, and Deva Ramanan. Efficiently scaling up crowdsourced video annotation. IJCV, 2013.
[42] Carl Vondrick, Hamed Pirsiavash, and Antonio Torralba. Anticipating visual representations from unlabeled video. CVPR, 2015.
[43] Jacob Walker, Arpan Gupta, and Martial Hebert. Patch to the future: Unsupervised visual prediction. In CVPR, 2014.
[44] Heng Wang and Cordelia Schmid. Action recognition with improved trajectories. In ICCV, 2013.
[45] Limin Wang, Yuanjun Xiong, Zhe Wang, and Yu Qiao. Towards good practices for very deep two-stream convnets. arXiv, 2015.
[46] Xiaolong Wang and Abhinav Gupta. Unsupervised learning of visual representations using videos. In ICCV, 2015.
[47] Xiaolong Wang and Abhinav Gupta. Generative image modeling using style and structure adversarial networks. arXiv, 2016.
[48] Bing Xu, Naiyan Wang, Tianqi Chen, and Mu Li. Empirical evaluation of rectified activations in convolutional network. arXiv, 2015.
[49] Tianfan Xue, Jiajun Wu, Katherine L Bouman, and William T Freeman. Visual dynamics: Probabilistic future frame synthesis via cross
convolutional networks. arXiv, 2016.
[50] Jenny Yuen and Antonio Torralba. A data-driven approach for event prediction. In ECCV. 2010.
[51] Matthew D Zeiler, Dilip Krishnan, Graham W Taylor, and Rob Fergus. Deconvolutional networks. In CVPR, 2010.
[52] Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Object detectors emerge in deep scene cnns. arXiv,
2014.
[53] Bolei Zhou, Agata Lapedriza, Jianxiong Xiao, Antonio Torralba, and Aude Oliva. Learning deep features for scene recognition using
places database. In NIPS, 2014.
[54] Yipin Zhou and Tamara L Berg. Temporal perception and prediction in ego-centric video. In ICCV, 2015.
[55] Yipin Zhou and Tamara L Berg. Learning temporal transformations from time-lapse videos. In ECCV, 2016.
9
| 6194 |@word trial:1 economically:1 version:1 cox:1 replicate:2 open:1 km:1 simulation:2 seek:3 jacob:1 paid:1 sgd:1 inpainting:1 reduction:1 contains:1 tuned:5 interestingly:1 deconvolutional:1 animated:3 past:2 existing:2 outperforms:5 kx0:1 wd:8 places2:1 activation:4 yet:1 diederik:1 must:3 gpu:1 realistic:13 ronan:2 shape:1 christian:1 designed:1 plot:1 extrapolating:1 grass:1 stationary:5 generative:37 v:4 fewer:2 discovering:1 ivo:1 alec:1 amir:1 capitalizes:1 nebojsa:1 core:1 short:2 filtered:2 detecting:1 provides:1 location:1 preference:4 philipp:1 firstly:3 tianfan:1 zhang:1 five:2 height:1 along:1 become:1 retrieving:1 incorrect:1 qualitative:3 fps:1 ijcv:1 wild:2 pathway:4 combine:1 advocate:1 scholkopf:1 introduce:1 jayaraman:1 x0:6 acquired:1 mask:7 notably:1 expected:1 paluri:2 elman:1 frequently:1 growing:1 multi:2 behavior:2 inspired:1 approval:1 ming:1 freeman:3 automatically:1 salakhutdinov:2 little:2 soumith:2 window:1 spain:1 provided:1 moreover:5 lapedriza:2 minimizes:1 transformation:2 temporal:20 quantitative:2 every:1 donated:1 grauman:2 classifier:2 mansimov:1 sherjil:1 unit:9 internally:2 grant:1 ramanan:2 producing:1 szlam:1 danihelka:1 before:1 local:1 tends:1 despite:1 pami:1 might:1 chose:2 initialization:5 hpirsiav:1 challenging:2 luke:1 shamma:1 range:2 unique:1 camera:4 enforces:1 testing:1 lecun:2 practice:2 recursive:1 jep:1 empirical:1 significantly:1 projection:1 ucf101:4 intention:1 word:1 pre:2 seeing:1 donald:1 suggest:5 unlabeled:20 thomee:1 put:1 context:2 naiyan:1 accumulating:1 optimize:1 missing:1 maximizing:1 jimmy:1 emily:1 focused:1 resolution:3 hadsell:1 amazon:2 simplicity:1 pouget:1 insight:1 fill:1 rehg:1 massive:1 carl:5 us:4 goodfellow:2 element:1 ego:2 recognition:14 expensive:3 walking:1 finetuned:1 native:1 labeled:15 database:1 role:1 levine:2 initializing:1 solved:1 wang:6 zamir:1 region:2 ranzato:1 movement:1 shuffle:2 nikola:1 benjamin:1 mu:1 ziebart:1 asked:1 warde:1 dynamic:13 gerald:1 trained:3 grateful:1 deva:2 segment:1 predictive:2 upon:1 creates:1 patterson:1 train:17 forced:1 roshan:1 describe:1 activate:1 kevin:1 harnessing:1 kalchbrenner:1 jean:1 larger:2 plausible:6 solve:2 cvpr:9 drawing:1 annotating:2 wg:7 encoder:4 kai:1 simonyan:2 transform:2 jointly:1 final:1 online:1 advantage:3 sequence:2 differentiable:1 blob:1 net:2 took:1 propose:2 matthias:1 interaction:1 tran:1 remainder:1 rapidly:1 moved:1 normalize:1 sutskever:1 darrell:1 produce:11 generating:4 adam:2 xuan:1 tianqi:1 object:18 help:3 illustrate:1 develop:2 recurrent:1 andrew:3 bourdev:1 yuanjun:1 edward:1 direction:1 correct:4 filter:1 stochastic:2 cnns:1 stabilization:1 human:5 opinion:1 donglai:1 require:2 kristen:2 yuen:1 brian:1 secondly:3 manohar:2 helping:1 normal:2 lawrence:1 mapping:1 predict:4 visualize:4 matthew:1 efros:2 optimizer:1 torralba:8 early:1 ruslan:2 estimation:1 tanh:3 concurrent:1 ex0:2 changshui:1 create:2 mit:8 activates:1 gaussian:4 modified:1 rather:1 zhou:4 agata:2 varying:1 consistently:1 sequencing:1 adversarial:20 baseline:6 dilip:1 detect:1 helpful:1 dim:1 typically:1 hidden:7 selective:1 interested:6 semantics:1 selects:1 pixel:2 issue:1 classification:6 hossein:1 lucas:1 development:1 art:1 spatial:4 fairly:3 initialize:2 softmax:1 aware:1 cordelia:1 piotr:1 beach:8 koray:1 identical:1 capitalize:2 unsupervised:16 denton:1 adelson:1 yu:2 foreground:13 future:15 icml:1 report:2 aslin:1 quantitatively:1 richard:1 few:3 strided:8 mirza:2 randomly:4 yoshua:1 micro:1 simultaneously:1 recognize:3 fire:2 william:4 lotter:1 
interest:2 investigate:1 alexei:2 zheng:1 evaluation:5 weakness:1 mixture:1 farley:1 behind:1 bart:1 elizalde:1 held:1 xiaolong:2 ambient:1 edge:1 encourage:1 worker:18 capable:1 arthur:1 respective:1 taylor:1 initialized:6 re:1 abundant:1 minimal:1 bouman:1 classify:2 modeling:5 ar:1 deviation:1 hundred:1 masked:1 recognizing:2 krizhevsky:1 conducted:1 sumit:2 too:1 front:1 motivating:1 osindero:1 encoders:1 spatiotemporal:1 gregory:1 xue:1 combined:1 petrovic:1 person:1 fundamental:2 stip:3 oord:2 khurram:1 probabilistic:1 off:1 lyndsey:1 felsen:1 pool:1 homography:3 michael:2 ilya:1 bethge:1 synthesis:1 choose:2 worse:1 external:1 creating:1 watching:1 style:1 li:5 szegedy:1 suggesting:4 potential:1 singleton:1 stride:1 coding:1 stabilize:1 jitendra:1 explicitly:2 ranking:1 collobert:2 stream:35 performed:1 try:4 extrapolation:1 diagnose:1 lowe:1 analyze:1 px0:2 red:2 wave:1 start:1 metz:1 crowdsourced:1 shai:1 annotation:1 simon:1 jia:1 contribution:3 minimize:1 square:1 ni:1 accuracy:6 convolutional:13 greg:1 efficiently:2 yield:1 conceptually:1 kavukcuoglu:1 trajectory:1 rectified:1 kris:1 visualizes:1 hamed:2 detector:1 flickr:3 trevor:1 against:1 failure:2 frequency:1 turk:2 james:2 tamara:2 chintala:2 static:14 sampled:2 umbc:3 dataset:8 gain:5 popular:1 ask:3 knowledge:4 color:2 dimensionality:1 lim:1 garrett:1 anticipating:1 centric:1 feed:2 higher:1 day:1 supervised:1 zisserman:2 maximally:2 wei:2 improved:1 evaluated:2 though:1 furthermore:1 just:1 fiser:1 until:1 convnets:1 hand:3 lstms:1 replacing:2 mehdi:2 lack:1 google:1 mode:1 logistic:2 quality:3 artifact:1 believe:6 aude:2 supervisory:2 building:2 phillip:1 k22:1 vignesh:1 true:1 former:1 hence:1 kitani:1 jojic:1 lapse:1 semantic:2 dinesh:1 adjacent:1 visualizing:2 game:1 width:1 during:1 please:2 ambiguous:2 confusion:1 performs:1 motion:26 l1:1 vondrick:8 image:21 wise:1 novel:1 charles:1 sigmoid:2 common:3 rotation:1 ishan:1 empirically:1 physical:1 ji:1 volume:1 thirdly:1 extend:2 million:1 slight:2 numerically:1 lucrative:1 cv:1 rd:1 tuning:2 doersch:1 hp:1 aytar:2 language:1 moving:3 stable:1 bruna:1 supervision:5 longer:1 add:3 disentangle:1 chelsea:1 own:1 recent:3 torresani:1 driven:1 reverse:1 aron:1 nvidia:1 misra:1 binary:2 baby:3 mcdermott:1 additional:1 isola:1 signal:4 jenny:1 full:3 multiple:1 keypoints:1 sound:2 technical:1 match:2 cross:2 long:2 bolei:2 divided:1 post:1 raia:1 laplacian:1 parenthesis:1 impact:2 prediction:10 ransac:1 oliva:2 regression:1 avidan:1 vision:4 expectation:1 metric:1 mturk:1 iteration:1 sometimes:1 kernel:2 normalization:4 histogram:1 pyramid:1 sergey:3 arxiv:22 damian:1 background:20 want:3 separately:1 fine:9 fellowship:1 jason:1 walker:1 crashing:1 source:2 crucial:1 yusuf:2 rest:1 suspect:1 tend:3 flow:2 chopra:2 leverage:4 synthetically:1 yang:1 split:2 bengio:1 embeddings:1 krishnan:1 relu:2 fit:1 architecture:20 reduce:1 idea:1 lubomir:1 golf:7 shift:1 whether:2 utility:2 hallucinating:1 torch7:2 accelerating:1 forecasting:2 effort:2 soomro:1 karen:2 action:15 antonio:7 gabriel:1 deep:11 useful:3 fake:2 fool:1 generally:4 tune:2 shake:1 amount:10 fragkiadaki:1 extensively:1 category:3 generate:15 http:3 percentage:4 nsf:1 happened:1 moses:1 jiajun:2 track:4 per:1 hyperparameter:1 dropping:1 four:1 fractionally:7 nevertheless:1 shih:1 changing:1 prevent:3 gmm:2 douglas:1 borth:1 nal:1 utilize:1 vast:1 timestep:1 powerful:1 place:1 reasonable:1 yann:2 wu:2 patch:2 coherence:2 prefer:10 scaling:2 krahenbuhl:1 graham:1 capturing:1 layer:16 dropout:2 followed:1 
distinguish:2 courville:1 activity:1 kreiman:1 strength:1 optic:1 alex:2 fei:2 scene:23 encodes:1 tag:1 speed:1 nitish:2 min:3 optical:1 relatively:3 px:4 gpus:1 alternate:1 combination:1 describes:1 across:1 slightly:2 remain:1 pan:1 joseph:1 rob:3 making:2 yfcc100m:1 quoc:1 den:2 invariant:3 iccv:7 mori:1 equation:1 visualization:3 bing:2 needed:1 know:1 mind:1 finn:1 photo:1 capitalizing:1 studying:1 available:2 yael:1 doll:1 apply:2 observe:2 enforce:2 appropriate:2 blurry:1 fowlkes:1 xiong:1 batch:4 alternative:1 marcaurelio:1 shah:1 cent:1 clustering:1 zeiler:1 k1:1 build:2 overflow:1 especially:2 tig:1 psychophysical:3 move:4 tensor:1 question:2 objective:2 g0:2 warping:1 malik:1 primary:1 bagnell:1 said:1 gradient:2 friedland:1 distance:1 thank:2 penultimate:1 decoder:2 collected:1 ozair:1 code:7 balance:1 acquire:1 minimizing:1 difficult:1 mostly:2 susceptible:1 katherine:1 ba:1 design:5 implementation:3 perform:1 allowing:1 convolution:14 snapshot:1 datasets:2 descent:1 pickup:1 beat:1 extended:1 hinton:1 frame:50 station:3 sharp:1 camille:1 community:1 drift:1 rating:1 david:4 complement:1 mechanical:2 required:1 discriminator:14 imagenet:1 coherent:1 learned:7 extrapolate:1 qiao:1 barcelona:1 hour:2 nip:6 assembled:1 kingma:1 able:5 beyond:1 below:1 usually:6 pattern:1 eighth:1 shuiwang:1 regime:2 sparsity:1 perception:1 tb:1 program:1 pirsiavash:2 max:3 video:145 event:1 pathak:1 natural:2 predicting:1 movie:2 historically:1 lorenzo:1 abhinav:3 temporally:1 mathieu:2 martial:3 started:1 created:1 extract:2 autoencoder:7 schmid:1 poland:1 chao:1 review:1 understanding:4 prior:3 acknowledgement:1 yeh:1 multiplication:1 joan:1 relative:6 graf:1 theis:1 loss:4 expect:1 highlight:3 fully:1 generation:36 filtering:1 querying:2 unfiltered:2 versus:1 geoffrey:1 generator:21 downloaded:1 verification:1 consistent:1 xiao:1 principle:1 heng:1 tiny:2 share:1 classifying:1 translation:2 karl:1 storing:1 course:4 eccv:5 supported:1 last:4 free:1 english:1 hebert:3 warp:1 attaching:1 deepak:1 leaky:1 emerge:1 van:2 dimension:2 world:2 evaluating:1 rich:1 qualitatively:1 collection:1 replicated:1 nguyen:1 far:1 reconstructed:1 ignore:1 preferred:2 bernhard:1 cuboid:2 overfitting:2 investigating:1 ioffe:1 summing:1 spatio:8 fergus:3 zhe:1 latent:5 khosla:1 table:3 additionally:1 promising:5 learn:13 channel:2 init:3 troubleshooting:1 du:1 rogez:1 complex:1 zitnick:1 did:2 main:1 arrow:3 noise:3 complementary:1 body:1 xu:3 advice:1 crafted:2 momentum:1 wish:1 explicit:2 governed:1 perceptual:2 answering:1 tied:1 third:1 jmlr:1 mubarak:1 learns:4 donahue:1 ian:2 tang:1 down:1 removing:1 bad:1 covariate:1 showing:2 sift:1 mobahi:1 explored:1 pz:1 experimented:6 consequent:1 gupta:4 normalizing:1 abadie:1 ramanathan:1 phd:1 magnitude:2 conditioned:2 chen:2 entropy:1 yin:1 explore:2 ez:1 visual:10 josh:1 vinyals:1 limin:1 contained:1 aditya:1 upsample:1 g2:1 srivastava:2 radford:1 chance:2 acm:1 weston:1 conditional:3 goal:1 viewed:2 towards:1 room:1 flickering:1 replace:1 couprie:1 change:3 owen:1 jeff:1 specifically:1 except:6 reducing:1 multimedia:1 hospital:2 secondary:1 invariance:1 ozsef:1 katerina:1 rarely:1 aaron:2 berg:2 internal:2 people:6 support:1 latter:1 jianxiong:1 oriol:1 evaluate:5 aleksandar:1 reg:1 scratch:3 ex:2 |
Causal Bandits: Learning Good Interventions via
Causal Inference
Finnian Lattimore
Australian National University and Data61/NICTA
[email protected]
Tor Lattimore
Indiana University, Bloomington
[email protected]
Mark D. Reid
Australian National University and Data61/NICTA
[email protected]
Abstract
We study the problem of using causal models to improve the rate at which good
interventions can be learned online in a stochastic environment. Our formalism
combines multi-arm bandits and causal inference to model a novel type of bandit
feedback that is not exploited by existing approaches. We propose a new algorithm
that exploits the causal feedback and prove a bound on its simple regret that is
strictly better (in all quantities) than algorithms that do not use the additional causal
information.
1 Introduction
Medical drug testing, policy setting, and other scientific processes are commonly framed and analysed
in the language of sequential experimental design and, in special cases, as bandit problems (Robbins,
1952; Chernoff, 1959). In this framework, single actions (also referred to as interventions) from a
pre-determined set are repeatedly performed in order to evaluate their effectiveness via feedback from
a single, real-valued reward signal. We propose a generalisation of the standard model by assuming
that, in addition to the reward signal, the learner observes the values of a number of covariates drawn
from a probabilistic causal model (Pearl, 2000). Causal models are commonly used in disciplines
where explicit experimentation may be difficult such as social science, demography and economics.
For example, when predicting the effect of changes to childcare subsidies on workforce participation,
or school choice on grades. Results from causal inference relate observational distributions to
interventional ones, allowing the outcome of an intervention to be predicted without explicitly
performing it. By exploiting the causal information we show, theoretically and empirically, how
non-interventional observations can be used to improve the rate at which high-reward actions can be
identified.
The type of problem we are concerned with is best illustrated with an example. Consider a farmer
wishing to optimise the yield of her crop. She knows that crop yield is only affected by temperature,
a particular soil nutrient, and moisture level but the precise effect of their combination is unknown.
In each season the farmer has enough time and money to intervene and control at most one of these
variables: deploying shade or heat lamps will set the temperature to be low or high; the nutrient
can be added or removed through a choice of fertilizer; and irrigation or rain-proof covers will keep
the soil wet or dry. When not intervened upon, the temperature, soil, and moisture vary naturally
from season to season due to weather conditions and these are all observed along with the final crop
yield at the end of each season. How might the farmer best experiment to identify the single, highest
yielding intervention in a limited number of seasons?
Contributions We take the first step towards formalising and solving problems such as the one
above. In §2 we formally introduce causal bandit problems in which interventions are treated as
arms in a bandit problem, but their influence on the reward, along with any other observations,
is assumed to conform to a known causal graph. We show that our causal bandit framework
subsumes the classical bandits (no additional observations) and contextual stochastic bandit problems
(observations are revealed before an intervention is chosen) before focusing on the case where, like
the above example, observations occur after each intervention is made.
Our focus is on the simple regret, which measures the difference between the return of the optimal
action and that of the action chosen by the algorithm after T rounds. In §3 we analyse a specific
family of causal bandit problems that we call parallel bandit problems, in which N factors affect the
reward independently and there are 2N possible interventions. We propose a simple causal best arm
identification algorithm for this problem and show that up to logarithmic factors it enjoys minimax
optimal simple regret guarantees of Õ(√(m/T)), where m depends on the causal model and may
be much smaller than N. In contrast, existing best arm identification algorithms suffer Ω(√(N/T))
simple regret (Thm. 4 by Audibert and Bubeck (2010)). This shows theoretically the value of our
framework over the traditional bandit problem. Experiments in §5 further demonstrate the value of
causal models in this framework.
In the general causal bandit problem, interventions and observations may have a complex relationship.
In §4 we propose a new algorithm inspired by importance-sampling that (a) enjoys sub-linear regret
equivalent to the optimal rate in the parallel bandit setting and (b) captures many of the intricacies of
sharing information in a causal graph in the general case. As in the parallel bandit case, the regret
guarantee scales like O(√(m/T)), where m depends on the underlying causal structure, with smaller
values corresponding to structures that are easier to learn. The value of m is always less than the
number of interventions N, and in the special case of the parallel bandit (where we have lower bounds)
the notions are equivalent.
Related Work As alluded to above, causal bandit problems can be treated as classical multi-armed
bandit problems by simply ignoring the causal model and extra observations and applying an existing
best-arm identification algorithm with well understood simple regret guarantees (Jamieson et al.,
2014). However, as we show in §3, ignoring the extra information available in the non-intervened
variables yields sub-optimal performance.
A well-studied class of bandit problems with side information are 'contextual bandits' (Langford
and Zhang, 2008; Agarwal et al., 2014). Our framework bears a superficial similarity to contextual
bandit problems since the extra observations on non-intervened variables might be viewed as context
for selecting an intervention. However, a crucial difference is that in our model the extra observations
are only revealed after selecting an intervention and hence cannot be used as context.
There have been several proposals for bandit problems where extra feedback is received after an
action is taken. Most recently, Alon et al. (2015) and Kocák et al. (2014) have considered very general
models related to partial monitoring games (Bartók et al., 2014) where rewards on unplayed actions
are revealed according to a feedback graph. As we discuss in §6, the parallel bandit problem can be
captured in this framework, however the regret bounds are not optimal in our setting. They also focus
on cumulative regret, which cannot be used to guarantee low simple regret (Bubeck et al., 2009). The
partial monitoring approach taken by Wu et al. (2015) could be applied (up to modifications for the
simple regret) to the parallel bandit, but the resulting strategy would need to know the likelihood
of each factor in advance, while our strategy learns this online. Yu and Mannor (2009) utilize
extra observations to detect changes in the reward distribution, whereas we assume fixed reward
distributions and use extra observations to improve arm selection. Avner et al. (2012) analyse bandit
problems where the choice of arm to pull and arm to receive feedback on are decoupled. The main
difference from our present work is our focus on simple regret and the more complex information
linking rewards for different arms via causal graphs. To the best of our knowledge, our paper is the
first to analyse simple regret in bandit problems with extra post-action feedback.
Two pieces of recent work also consider applying ideas from causal inference to bandit problems.
Bareinboim et al. (2015) demonstrate that in the presence of confounding variables the value that a
variable would have taken had it not been intervened on can provide important contextual information.
Their work differs in many ways. For example, the focus is on the cumulative regret and the context
is observed before the action is taken and cannot be controlled by the learning agent.
Ortega and Braun (2014) present an analysis and extension of Thompson sampling assuming actions
are causal interventions. Their focus is on causal induction (i.e., learning an unknown causal model)
instead of exploiting a known causal model. Combining their handling of causal induction with our
analysis is left as future work.
The truncated importance weighted estimators used in §4 have been studied before in a causal
framework by Bottou et al. (2013), where the focus is on learning from observational data, but
not controlling the sampling process. They also briefly discuss some of the issues encountered in
sequential design, but do not give an algorithm or theoretical results for this case.
2 Problem Setup
We now introduce a novel class of stochastic sequential decision problems which we call causal
bandit problems. In these problems, rewards are given for repeated interventions on a fixed causal
model (Pearl, 2000). Following the terminology and notation in Koller and Friedman (2009), a causal
model is given by a directed acyclic graph G over a set of random variables X = {X_1, . . . , X_N}
and a joint distribution P over X that factorises over G. We will assume each variable only takes
on a finite number of distinct values. An edge from variable X_i to X_j is interpreted to mean that a
change in the value of X_i may directly cause a change to the value of X_j. The parents of a variable
X_i, denoted Pa_{X_i}, is the set of all variables X_j such that there is an edge from X_j to X_i in G. An
intervention or action (of size n), denoted do(X = x), assigns the values x = {x_1, . . . , x_n} to the
corresponding variables X = {X_1, . . . , X_n} ⊆ X, with the empty intervention (where no variable is
set) denoted do(). The intervention also 'mutilates' the graph G by removing all edges from Pa_{X_i}
to X_i for each X_i ∈ X. The resulting graph defines a probability distribution P{X^c | do(X = x)}
over X^c := X \ X. Details can be found in Chapter 21 of Koller and Friedman (2009).
A learner for a causal bandit problem is given the causal model's graph G and a set of allowed actions
A. One variable Y ∈ X is designated as the reward variable and takes on values in {0, 1}. We denote
the expected reward for the action a = do(X = x) by μ_a := E[Y | do(X = x)] and the optimal
expected reward by μ* := max_{a∈A} μ_a. The causal bandit game proceeds over T rounds. In round t,
the learner intervenes by choosing a_t = do(X_t = x_t) ∈ A based on previous observations. It then
observes sampled values for all non-intervened variables X_t^c drawn from P{X_t^c | do(X_t = x_t)},
including the reward Y_t ∈ {0, 1}. After T observations the learner outputs an estimate of the optimal
action â_T ∈ A based on its prior observations.
The objective of the learner is to minimise the simple regret R_T = μ* − E[μ_{â_T}]. This is sometimes
referred to as a 'pure exploration' (Bubeck et al., 2009) or 'best-arm identification' problem (Gabillon
et al., 2012) and is most appropriate when, as in drug and policy testing, the learner has a fixed
experimental budget after which its policy will be fixed indefinitely.
Although we will focus on the intervene-then-observe ordering of events within each round, other
scenarios are possible. If the non-intervened variables are observed before an intervention is selected,
our framework reduces to stochastic contextual bandits, which are already reasonably well understood (Agarwal et al., 2014). Even if no observations are made during the rounds, the causal model
may still allow offline pruning of the set of allowable interventions, thereby reducing the complexity.
We note that the classical K-armed stochastic bandit problem can be recovered in our framework by
considering a simple causal model with one edge connecting a single variable X that can take on K
values to a reward variable Y ∈ {0, 1}, where P{Y = 1 | X} = r(X) for some arbitrary but unknown,
real-valued function r. The set of allowed actions in this case is A = {do(X = k) : k ∈ {1, . . . , K}}.
Conversely, any causal bandit problem can be reduced to a classical stochastic |A|-armed bandit
problem by treating each possible intervention as an independent arm and ignoring all sampled values
for the observed variables except for the reward. Intuitively though, one would expect to perform
better by making use of the extra structure and observations.
3 Regret Bounds for Parallel Bandit
In this section we propose and analyse an algorithm for achieving the optimal regret in a natural
special case of the causal bandit problem which we call the parallel bandit. It is simple enough to
admit a thorough analysis but rich enough to model the type of problem discussed in §1, including
the farming example. It also suffices to witness the regret gap between algorithms that make use of
causal models and those which do not.

[Figure 1: Causal Models. (a) Parallel graph: X_1, . . . , X_N each point directly to Y. (b) Confounded graph: X_1 points to both X_2 and Y, and X_2 points to Y. (c) Chain graph: X_1 → X_2 → · · · → X_N → Y.]
The causal model for this class of problems has N binary variables {X_1, . . . , X_N} where each
X_i ∈ {0, 1} is an independent cause of a reward variable Y ∈ {0, 1}, as shown in Figure 1a. All
variables are observable and the set of allowable actions consists of all size 0 and size 1 interventions:
A = {do()} ∪ {do(X_i = j) : 1 ≤ i ≤ N and j ∈ {0, 1}}. In the farming example from the introduction,
X_1 might represent temperature (e.g., X_1 = 0 for low and X_1 = 1 for high). The interventions
do(X_1 = 0) and do(X_1 = 1) indicate the use of shades or heat lamps to keep the temperature low or
high, respectively.
In each round the learner either purely observes by selecting do() or sets the value of a single variable.
The remaining variables are simultaneously set by independently biased coin flips. The values of all
variables are then used to determine the distribution of rewards for that round. Formally, when not
intervened upon, we assume that each X_i ∼ Bernoulli(q_i) where q = (q_1, . . . , q_N) ∈ [0, 1]^N, so that
q_i = P{X_i = 1}. The value of the reward variable is distributed as P{Y = 1 | X} = r(X) where
r : {0, 1}^N → [0, 1] is an arbitrary, fixed, and unknown function. In the farming example, this choice
of Y models the success or failure of a season's crop, which depends stochastically on the various
environment variables.
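For concreteness, one round of this environment can be simulated in a few lines. The following Python sketch is illustrative only; the function name and the particular choice of r are ours, not taken from the paper's released code:

import numpy as np

def sample_round(q, r, action=None, rng=np.random):
    # One round of the parallel bandit.
    # q: vector of P{X_i = 1}; r: maps a binary vector X to P{Y = 1 | X};
    # action: None for do(), or a pair (i, x) for do(X_i = x).
    X = (rng.random(len(q)) < q).astype(int)   # independent biased coin flips
    if action is not None:
        i, x = action
        X[i] = x                               # the intervention overrides nature
    Y = int(rng.random() < r(X))               # Bernoulli reward
    return X, Y

# Example: the reward depends only on X_0.
q = np.full(50, 0.5)
r = lambda X: 0.6 if X[0] == 1 else 0.4
X, Y = sample_round(q, r, action=(0, 1))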
The Parallel Bandit Algorithm The algorithm operates as follows. For the first T/2 rounds it
chooses do() to collect observational data. As the only link from each X_1, . . . , X_N to Y is a direct,
causal one, P{Y | do(X_i = j)} = P{Y | X_i = j}. Thus we can create good estimators for the returns
of the actions do(X_i = j) for which P{X_i = j} is large. The actions for which P{X_i = j} is small
may not be observed (often), so estimates of their returns could be poor. To address this, the remaining
T/2 rounds are evenly split to estimate the rewards for these infrequently observed actions. The
difficulty of the problem depends on q and, in particular, on how many of the variables are unbalanced
(i.e., small q_i or (1 − q_i)). For τ ∈ [2 . . . N] let I_τ = {i : min{q_i, 1 − q_i} < 1/τ}. Define
m(q) = min {τ : |I_τ| ≤ τ} .
I_τ is the set of variables considered unbalanced, and we tune τ to trade off identifying the low
probability actions against not having too many of them, so as to minimize the worst-case simple
regret. When q = (1/2, . . . , 1/2) we have m(q) = 2, and when q = (0, . . . , 0) we have m(q) = N. We do
not assume that q is known, thus Algorithm 1 also utilizes the samples captured during the observational
phase to estimate m(q). Although very simple, the following two theorems show that this algorithm is
effectively optimal.
Theorem 1. Algorithm 1 satisfies

R_T ≤ O( √( (m(q)/T) log(NT/m(q)) ) ).

Algorithm 1 Parallel Bandit Algorithm
1: Input: Total rounds T and N.
2: for t ∈ {1, . . . , T/2} do
3:   Perform empty intervention do()
4:   Observe X_t and Y_t
5: for a = do(X_i = x) ∈ A do
6:   Count times X_i = x seen: T_a = Σ_{t=1}^{T/2} 1{X_{t,i} = x}
7:   Estimate reward: μ̂_a = (1/T_a) Σ_{t=1}^{T/2} 1{X_{t,i} = x} Y_t
8:   Estimate probabilities: p̂_a = 2T_a/T, q̂_i = p̂_{do(X_i = 1)}
9: Compute m̂ = m(q̂) and A = {a ∈ A : p̂_a ≤ 1/m̂}.
10: Let T_A := T/(2|A|) be the number of times to sample each a ∈ A.
11: for a = do(X_i = x) ∈ A do
12:   for t ∈ {1, . . . , T_A} do
13:     Intervene with a and observe Y_t
14:   Re-estimate μ̂_a = (1/T_A) Σ_{t=1}^{T_A} Y_t
15: return estimated optimal â_T ∈ arg max_{a∈A} μ̂_a

Theorem 2. For all strategies and all T, q, there exist rewards such that

R_T ≥ Ω( √( m(q)/T ) ).
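A compact implementation of Algorithm 1 is sketched below in Python; it assumes the sample_round-style simulator interface introduced earlier and, for brevity, omits the empty intervention from the final argmax:

import numpy as np

def parallel_bandit(T, N, environment, rng=np.random):
    # Algorithm 1 sketch. environment(action) returns one round (X, Y);
    # action is None for do() or a pair (i, x) for do(X_i = x).
    actions = [(i, x) for i in range(N) for x in (0, 1)]
    counts = np.zeros((N, 2))        # T_a: rounds where X_i = x was observed
    totals = np.zeros((N, 2))        # accumulated reward over those rounds
    for _ in range(T // 2):          # phase 1: purely observational rounds
        X, Y = environment(None)
        counts[np.arange(N), X] += 1
        totals[np.arange(N), X] += Y
    mu_hat = totals / np.maximum(counts, 1)
    p_hat = 2 * counts / T           # estimates of P{X_i = x}
    q_hat = p_hat[:, 1]
    # m(q): smallest tau in [2..N] with at most tau unbalanced variables
    m_hat = next(tau for tau in range(2, N + 1)
                 if np.sum(np.minimum(q_hat, 1 - q_hat) < 1 / tau) <= tau)
    rare = [(i, x) for (i, x) in actions if p_hat[i, x] <= 1 / m_hat]
    if rare:                         # phase 2: sample rare actions directly
        per_action = (T // 2) // len(rare)
        for (i, x) in rare:
            if per_action > 0:
                ys = [environment((i, x))[1] for _ in range(per_action)]
                mu_hat[i, x] = np.mean(ys)
    return max(actions, key=lambda a: mu_hat[a])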
The proofs of Theorems 1 and 2 follow by carefully analysing the concentration of p̂_a and m̂ about
their true values and may be found in the supplementary material. By utilizing knowledge of the
causal structure, Algorithm 1 effectively only has to explore the m(q) 'difficult' actions. Standard
multi-armed bandit algorithms must explore all 2N actions and thus achieve regret Ω(√(N/T)). Since
m is typically much smaller than N, the new algorithm can significantly outperform classical bandit
algorithms in this setting. In practice, you would combine the data from both phases to estimate
rewards for the low probability actions. We do not do so here as it slightly complicates the proofs and
does not improve the worst case regret.
4 Regret Bounds for General Graphs
We now consider the more general problem where the graph structure is known, but arbitrary. For
general graphs, P{Y | X_i = j} ≠ P{Y | do(X_i = j)} (correlation is not causation). However, if all
the variables are observable, any causal distribution P{X_1 . . . X_N | do(X_i = j)} can be expressed in
terms of observational distributions via the truncated factorization formula (Pearl, 2000):

P{X_1 . . . X_N | do(X_i = j)} = Π_{k≠i} P{X_k | Pa_{X_k}} · δ(X_i − j),

where Pa_{X_k} denotes the parents of X_k and δ is the Dirac delta function.
We could naively generalize our approach for parallel bandits by observing for T/2 rounds, applying
the truncated product factorization to write an expression for each P{Y | a} in terms of observational
quantities, and explicitly playing the actions for which the observational estimates were poor. However,
it is no longer optimal to ignore the information we can learn about the reward for intervening on one
variable from rounds in which we act on a different variable. Consider the graph in Figure 1c and
suppose each variable deterministically takes the value of its parent, X_k = X_{k−1} for k ∈ 2, . . . , N,
and P{X_1 = 1} = 0. We can learn the reward for all the interventions do(X_i = 1) simultaneously by
selecting do(X_1 = 1), but not from do(). In addition, the variance of the observational estimator for
a = do(X_i = j) can be high even if P{X_i = j} is large. Given the causal graph in Figure 1b,
P{Y | do(X_2 = j)} = Σ_{X_1} P{X_1} P{Y | X_1, X_2 = j}. Suppose X_2 = X_1 deterministically; no
matter how large P{X_2 = 1} is, we will never observe (X_2 = 1, X_1 = 0) and so cannot get a good
estimate for P{Y | do(X_2 = 1)}.
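For small discrete models the truncated factorization can be evaluated by direct enumeration. The sketch below computes P{Y = 1 | do(X_2 = j)} for the confounded graph of Figure 1b by summing over the parent X_1; the probability values are made up for illustration:

# Made-up numbers for Figure 1b: X1 -> X2, X1 -> Y, X2 -> Y.
p_x1 = {0: 0.7, 1: 0.3}                      # P{X1 = x1}
p_y = {(0, 0): 0.2, (0, 1): 0.5,             # P{Y = 1 | X1 = x1, X2 = x2}
       (1, 0): 0.4, (1, 1): 0.9}

def p_y_do_x2(j):
    # P{Y = 1 | do(X2 = j)} = sum over x1 of P{x1} * P{Y = 1 | x1, X2 = j}
    return sum(p_x1[x1] * p_y[(x1, j)] for x1 in (0, 1))

print(p_y_do_x2(1))   # 0.7 * 0.5 + 0.3 * 0.9 = 0.62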
To solve the general problem we need an estimator for each action that incorporates information
obtained from every other action and a way to optimally allocate samples to actions. To address
this difficult problem, we assume the conditional interventional distributions P{Pa_Y | a} (but not
P{Y | a}) are known. These could be estimated from experimental data on the same covariates but
where the outcome of interest differed, such that Y was not included, or similarly from observational
data subject to identifiability constraints. Of course this is a somewhat limiting assumption, but
seems like a natural place to start. The challenge of estimating the conditional distributions for
all variables in an optimal way is left as an interesting future direction. Let η be a distribution on
available interventions a ∈ A, so η_a ≥ 0 and Σ_{a∈A} η_a = 1. Define Q = Σ_{a∈A} η_a P{Pa_Y | a} to
be the mixture distribution over the interventions with respect to η.
Our algorithm samples T actions from η and uses them to estimate the returns μ_a for all a ∈ A
simultaneously via a truncated importance weighted estimator. Let Pa_Y(X) denote the realization of
the variables in X that are parents of Y, and define R_a(X) = P{Pa_Y(X) | a} / Q{Pa_Y(X)} and

μ̂_a = (1/T) Σ_{t=1}^T Y_t R_a(X_t) 1{R_a(X_t) ≤ B_a},

where B_a ≥ 0 is a constant that tunes the level of truncation, to be chosen subsequently. The truncation
introduces a bias in the estimator, but simultaneously chops the potentially heavy tail that is so
detrimental to its concentration guarantees.

Algorithm 2 General Algorithm
Input: T, η ∈ [0, 1]^A, B ∈ [0, ∞)^A
for t ∈ {1, . . . , T} do
  Sample action a_t from η
  Do action a_t and observe X_t and Y_t
for a ∈ A do
  μ̂_a = (1/T) Σ_{t=1}^T Y_t R_a(X_t) 1{R_a(X_t) ≤ B_a}
return â_T = arg max_a μ̂_a
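Given the conditional distributions P{Pa_Y | a}, the truncated importance weighted estimator is short to implement. The interface below (dictionaries mapping actions to parent-configuration distributions) is our own illustrative choice, not the authors' code:

def truncated_iw_estimates(samples, p_pa_y, eta, B):
    # Estimate mu_a for every action from samples drawn under eta.
    # samples: list of (pa, y) pairs, pa being the observed parent
    #          configuration of Y and y the binary reward
    # p_pa_y:  dict action -> (dict parent-config -> probability)
    # eta:     dict action -> sampling probability
    # B:       dict action -> truncation level B_a
    actions = list(p_pa_y)
    T = len(samples)
    mu_hat = {}
    for a in actions:
        total = 0.0
        for pa, y in samples:
            q = sum(eta[b] * p_pa_y[b].get(pa, 0.0) for b in actions)
            ratio = p_pa_y[a].get(pa, 0.0) / q if q > 0 else 0.0
            if ratio <= B[a]:          # truncation removes the heavy tail
                total += y * ratio
        mu_hat[a] = total / T
    return mu_hat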
The distribution over actions, η, plays the role of allocating samples to actions and is optimized to
minimize the worst-case simple regret. Abusing notation, we define m(η) by

m(η) = max_{a∈A} E_a [ P{Pa_Y(X) | a} / Q{Pa_Y(X)} ], where E_a is the expectation with respect to P{· | a}.
We will show shortly that m(η) is a measure of the difficulty of the problem that approximately
coincides with the version for parallel bandits, justifying the overloading of the name.

Theorem 3. If Algorithm 2 is run with B ∈ R^A given by B_a = √( m(η)T / log(2T|A|) ), then

R_T ≤ O( √( (m(η)/T) log(2T|A|) ) ).
The proof is in the supplementary materials. Note the regret has the same form as that obtained
for Algorithm 1, with m(η) replacing m(q). Algorithm 1 assumes only the graph structure and not
knowledge of the conditional distributions on X. Thus it has broader applicability to the parallel
graph than the generic algorithm given here. We believe that Algorithm 2 with the optimal choice of
η is close to minimax optimal, but leave lower bounds for future work.
Choosing the Sampling Distribution Algorithm 2 depends on a choice of sampling distribution
Q that is determined by η. In light of Theorem 3 a natural choice of η is the minimiser of m(η):

η* = arg min_η m(η) = arg min_η max_{a∈A} E_a [ P{Pa_Y(X) | a} / Σ_{b∈A} η_b P{Pa_Y(X) | b} ].
Since a mixture of convex functions is convex and the maximum of a set of convex functions is
convex, we see that m(η) is convex (in η). Therefore the minimisation problem may be tackled
using standard techniques from convex optimisation. The quantity m(η*) may be interpreted as the
minimum achievable worst-case variance of the importance weighted estimator. In the experimental
section we present some special cases, but for now we give two simple results. The first shows that
|A| serves as an upper bound on m(η*).
Proposition 4. m(η*) ≤ |A|.
Proof. By definition, m(η*) ≤ m(η) for all η. Let η_a = 1/|A| for all a. Then

m(η) = max_a E_a [ P{Pa_Y(X) | a} / Q{Pa_Y(X)} ] ≤ max_a E_a [ P{Pa_Y(X) | a} / (η_a P{Pa_Y(X) | a}) ] = max_a 1/η_a = |A|.
The second observation is that, in the parallel bandit setting, m(η*) ≤ 2m(q). This is easy to see
by letting η_a = 1/2 for a = do() and η_a = 1{P{X_i = j} ≤ 1/m(q)} / (2m(q)) for the actions
corresponding to do(X_i = j), and applying an argument like that for Proposition 4. The proof is in
the supplementary materials.
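Since m(η) is convex in η, η* can be computed with a generic solver. The sketch below assumes the parent configurations of Y can be enumerated, so that each expectation E_a reduces to a finite sum over a matrix P[a, s] = P{Pa_Y(X) = s | a}; this matrix interface is our own choice, made for illustration:

import numpy as np
from scipy.optimize import minimize

def optimal_eta(P):
    # P[a, s] = P{Pa_Y(X) = s | a}; returns eta minimizing m(eta).
    A, S = P.shape

    def m(eta):
        Q = eta @ P                                # mixture over configs
        with np.errstate(divide="ignore", invalid="ignore"):
            ratios = np.where(Q > 0, P / Q, 0.0)   # R_a(s) for every a, s
        # E_a[R_a] = sum_s P[a, s] * R_a(s); m(eta) is the worst case over a
        return np.max(np.sum(P * ratios, axis=1))

    cons = ({"type": "eq", "fun": lambda e: np.sum(e) - 1.0},)
    bounds = [(0.0, 1.0)] * A
    res = minimize(m, np.full(A, 1.0 / A), bounds=bounds, constraints=cons)
    return res.x, m(res.x)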
Remark 5. The choice of B_a given in Theorem 3 is not the only possibility. As we shall see in the
experiments, it is often possible to choose B_a significantly larger when there is no heavy tail, and
this can drastically improve performance by eliminating the bias. This is especially true when the
ratio R_a is never too large and Bernstein's inequality could be used directly without the truncation.
For another discussion see the article by Bottou et al. (2013), who also use importance weighted
estimators to learn from observational data.
5 Experiments
We compare Algorithms 1 and 2 with the Successive Reject algorithm of Audibert and Bubeck (2010),
Thompson Sampling, and UCB under a variety of conditions. Thompson Sampling and UCB are
optimized to minimize cumulative regret. We apply them in the fixed horizon, best arm identification
setting by running them up to horizon T and then selecting the arm with the highest empirical mean.
The importance weighted estimator used by Algorithm 2 is not truncated, which is justified in this
setting by Remark 5.
[Figure 2: Experimental results. (a) Simple regret vs m(q) for fixed horizon T = 400 and number of variables N = 50. (b) Simple regret vs horizon T, with N = 50, m = 2 and ε = √(N/(8T)). (c) Simple regret vs horizon T, with N = 50, m = 2 and fixed ε = 0.3. Each panel compares Algorithm 2, Algorithm 1, Successive Reject, UCB, and Thompson Sampling.]
Throughout we use a model in which Y depends only on a single variable X_1 (this is unknown to
the algorithms): Y_t ∼ Bernoulli(1/2 + ε) if X_1 = 1 and Y_t ∼ Bernoulli(1/2 − ε′) otherwise, where
ε′ = q_1 ε/(1 − q_1). This leads to an expected reward of 1/2 + ε for do(X_1 = 1), 1/2 − ε′ for do(X_1 = 0),
and 1/2 for all other actions. We set q_i = 0 for i ≤ m and 1/2 otherwise. Note that changing m and
thus q has no effect on the reward distribution. For each experiment, we show the average regret
over 10,000 simulations with error bars displaying three standard errors. The code is available from
https://github.com/finnhacks42/causal_bandits
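For reference, this reward model can be reproduced in a few lines (a sketch mirroring the description above, not the released code; indices are 0-based):

import numpy as np

def make_experiment(N=50, m=2, eps=0.3):
    q = np.where(np.arange(N) < m, 0.0, 0.5)   # q_i = 0 for the first m vars
    eps_prime = q[0] * eps / (1 - q[0])        # here q_1 = 0, so eps' = 0
    def reward_prob(X):
        return 0.5 + eps if X[0] == 1 else 0.5 - eps_prime
    return q, reward_prob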
In Figure 2a we fix the number of variables N and the horizon T and compare the performance
of the algorithms as m increases. The regret for the Successive Reject algorithm is constant, as it
depends only on the reward distribution and has no knowledge of the causal structure. For the causal
algorithms it increases approximately with √m. As m approaches N, the gain the causal algorithms
obtain from knowledge of the structure is outweighed by the fact that they do not leverage the observed
rewards to focus sampling effort on actions with high pay-offs.
Figure 2b demonstrates the performance of the algorithms in the worst case environment for standard
bandits, where the gap between the optimal and sub-optimal arms, ε = √(N/(8T)), is just too small
to be learned. This gap is learnable by the causal algorithms, for which the worst case ε depends on
m ≪ N. In Figure 2c we fix N and ε and observe that, for sufficiently large T, the regret decays
exponentially. The decay constant is larger for the causal algorithms, as they have observed a greater
effective number of samples for a given T.
For the parallel bandit problem, the regression estimator used in the specific algorithm outperforms the
truncated importance weighted estimator in the more general algorithm, despite the fact the specific
algorithm must estimate q from the data. This is an interesting phenomenon that has been noted
before in off-policy evaluation where the regression (and not the importance weighted) estimator is
known to be minimax optimal asymptotically (Li et al., 2014).
6 Discussion & Future Work

Algorithm 2 for general causal bandit problems estimates the reward for all allowable interventions
a ∈ A over T rounds by sampling and applying interventions from a distribution η. Theorem 3 shows
that this algorithm has (up to log factors) simple regret that is O(√(m(η)/T)), where the parameter
m(η) measures the difficulty of learning the causal model and is always less than N. The value of
m(η) is a uniform bound on the variance of the reward estimators μ̂_a and, intuitively, problems where
all variables' values in the causal model 'occur naturally' when interventions are sampled from η
will have low values of m(η).
The main practical drawback of Algorithm 2 is that both the estimator μ̂_a and the optimal sampling
distribution η* (i.e., the one that minimises m(η)) require knowledge of the conditional distributions
P{Pa_Y | a} for all a ∈ A. In contrast, in the special case of parallel bandits, Algorithm 1 uses
the do() action to effectively estimate m(η) and the rewards, then re-samples the interventions with
variances that are not bound by m(η). Despite these extra estimates, Theorem 2 shows that this
approach is optimal (up to log factors). Finding an algorithm that only requires the causal graph, and
lower bounds for its simple regret in the general case, is left as future work.
Making Better Use of the Reward Signal Existing algorithms for best arm identification are
based on 'successive rejection' (SR) of arms using UCB-like bounds on their rewards (Even-Dar
et al., 2002). In contrast, our algorithms completely ignore the reward signal when developing their
arm sampling policies and only use the rewards when estimating μ̂_a. Incorporating the reward signal
into our sampling techniques or designing more adaptive reward estimators that focus on high reward
interventions is an obvious next step. This would likely improve the poor performance of our causal
algorithm relative to the Successive Reject algorithm for large m, as seen in Figure 2a. For the parallel
bandit the required modifications should be quite straightforward. The idea would be to adapt the
algorithm to essentially use successive elimination in the second phase, so arms are eliminated as
soon as they are provably no longer optimal with high probability. In the general case a similar
modification is also possible by dividing the budget T into phases and optimising the sampling
distribution η, eliminating arms when their confidence intervals are no longer overlapping. Note
that these modifications will not improve the minimax regret, which at least for the parallel bandit is
already optimal. For this reason we prefer to emphasize the main point that causal structure should
be exploited when available. Another observation is that Algorithm 2 is actually using a fixed design,
which in some cases may be preferred to a sequential design for logistical reasons. This is not possible
for Algorithm 1, since the q vector is unknown.
Cumulative Regret Although we have focused on simple regret in our analysis, it would also be
natural to consider the cumulative regret. In the case of the parallel bandit problem we can slightly
modify the analysis from (Wu et al., 2015) on bandits with side information to get near-optimal
cumulative regret guarantees. They consider a finite-armed bandit model with side information where
in each round the learner chooses an action and receives a Gaussian reward signal for all actions, but
with a known variance that depends on the chosen action. In this way the learner can gain information
about actions it does not take with varying levels of accuracy. The reduction follows by substituting
the importance weighted estimators in place of the Gaussian reward. In the case that q is known
this would lead to a known variance and the only (insignificant) difference is the Bernoulli noise
model. In the parallel bandit case we believe this would lead to near-optimal cumulative regret, at
least asymptotically.
The parallel bandit problem can also be viewed as an instance of a time varying graph feedback
problem (Alon et al., 2015; Kocák et al., 2014), where at each timestep the feedback graph G_t is
selected stochastically, dependent on q, and revealed after an action has been chosen. The feedback
graph is distinct from the causal graph. A link A → B in G_t indicates that selecting the action A
reveals the reward for action B. For this parallel bandit problem, G_t will always be a star graph
with the action do() connected to half the remaining actions. However, Alon et al. (2015) and Kocák
et al. (2014) give adversarial algorithms, which when applied to the parallel bandit problem obtain
the standard bandit regret. A malicious adversary can select the same graph each time, such that
the rewards for half the arms are never revealed by the informative action. This is equivalent to a
nominally stochastic selection of feedback graph where q = 0.
Causal Models with Non-Observable Variables If we assume knowledge of the conditional interventional distributions P{Pa_Y | a}, our analysis applies unchanged to the case of causal models with
non-observable variables. Some of the interventional distributions may be non-identifiable meaning
we can not obtain prior estimates for P {PaY |a} from even an infinite amount of observational
data. Even if all variables are observable and the graph is known, if the conditional distributions
are unknown, then Algorithm 2 cannot be used. Estimating these quantities while simultaneously
minimising the simple regret is an interesting and challenging open problem.
Partially or Completely Unknown Causal Graph A much more difficult generalisation would
be to consider causal bandit problems where the causal graph is completely unknown or known to
be a member of class of models. The latter case arises naturally if we assume free access to a large
observational dataset, from which the Markov equivalence class can be found via causal discovery
techniques. Work on the problem of selecting experiments to discover the correct causal graph from
within a Markov equivalence class (Eberhardt et al., 2005; Eberhardt, 2010; Hauser and Bühlmann,
2014; Hu et al., 2014) could potentially be incorporated into a causal bandit algorithm. In particular,
Hu et al. (2014) show that only O (log log n) multi-variable interventions are required on average to
recover a causal graph over n variables once purely observational data is used to recover the ?essential
graph?. Simultaneously learning a completely unknown causal model while estimating the rewards
of interventions without a large observational dataset would be much more challenging.
References

Agarwal, A., Hsu, D., Kale, S., Langford, J., Li, L., and Schapire, R. E. (2014). Taming the monster: A fast and simple algorithm for contextual bandits. In ICML, pages 1638–1646.
Alon, N., Cesa-Bianchi, N., Dekel, O., and Koren, T. (2015). Online learning with feedback graphs: Beyond bandits. In COLT, pages 23–35.
Audibert, J.-Y. and Bubeck, S. (2010). Best arm identification in multi-armed bandits. In COLT, pages 13–p.
Avner, O., Mannor, S., and Shamir, O. (2012). Decoupling exploration and exploitation in multi-armed bandits. In ICML, pages 409–416.
Bareinboim, E., Forney, A., and Pearl, J. (2015). Bandits with unobserved confounders: A causal approach. In NIPS, pages 1342–1350.
Bartók, G., Foster, D. P., Pál, D., Rakhlin, A., and Szepesvári, C. (2014). Partial monitoring-classification, regret bounds, and algorithms. Mathematics of Operations Research, 39(4):967–997.
Bottou, L., Peters, J., Quinonero-Candela, J., Charles, D. X., Chickering, D. M., Portugaly, E., Ray, D., Simard, P., and Snelson, E. (2013). Counterfactual reasoning and learning systems: The example of computational advertising. JMLR, 14(1):3207–3260.
Bubeck, S., Munos, R., and Stoltz, G. (2009). Pure exploration in multi-armed bandits problems. In ALT, pages 23–37.
Chernoff, H. (1959). Sequential design of experiments. The Annals of Mathematical Statistics, pages 755–770.
Eberhardt, F. (2010). Causal discovery as a game. In NIPS Causality: Objectives and Assessment, pages 87–96.
Eberhardt, F., Glymour, C., and Scheines, R. (2005). On the number of experiments sufficient and in the worst case necessary to identify all causal relations among n variables. In UAI.
Even-Dar, E., Mannor, S., and Mansour, Y. (2002). PAC bounds for multi-armed bandit and Markov decision processes. In Computational Learning Theory, pages 255–270.
Gabillon, V., Ghavamzadeh, M., and Lazaric, A. (2012). Best arm identification: A unified approach to fixed budget and fixed confidence. In NIPS, pages 3212–3220.
Hauser, A. and Bühlmann, P. (2014). Two optimal strategies for active learning of causal models from interventional data. International Journal of Approximate Reasoning, 55(4):926–939.
Hu, H., Li, Z., and Vetta, A. R. (2014). Randomized experimental design for causal graph discovery. In NIPS, pages 2339–2347.
Jamieson, K., Malloy, M., Nowak, R., and Bubeck, S. (2014). lil'UCB: An optimal exploration algorithm for multi-armed bandits. In COLT, pages 423–439.
Kocák, T., Neu, G., Valko, M., and Munos, R. (2014). Efficient learning by implicit exploration in bandit problems with side observations. In NIPS, pages 613–621.
Koller, D. and Friedman, N. (2009). Probabilistic Graphical Models: Principles and Techniques. MIT Press.
Langford, J. and Zhang, T. (2008). The epoch-greedy algorithm for multi-armed bandits with side information. In NIPS, pages 817–824.
Li, L., Munos, R., and Szepesvári, C. (2014). On minimax optimal offline policy evaluation. arXiv preprint arXiv:1409.3653.
Ortega, P. A. and Braun, D. A. (2014). Generalized Thompson sampling for sequential decision-making and causal inference. Complex Adaptive Systems Modeling, 2(1):2.
Pearl, J. (2000). Causality: Models, Reasoning and Inference. MIT Press, Cambridge.
Robbins, H. (1952). Some aspects of the sequential design of experiments. Bulletin of the American Mathematical Society, 58(5):527–536.
Wu, Y., György, A., and Szepesvári, C. (2015). Online learning with Gaussian payoffs and side observations. In NIPS, pages 1360–1368.
Yu, J. Y. and Mannor, S. (2009). Piecewise-stationary bandit problems with side observations. In ICML, pages 1177–1184.
Optimal Cluster Recovery
in the Labeled Stochastic Block Model
Se-Young Yun
CNLS, Los Alamos National Lab.
Los Alamos, NM 87545
[email protected]
Alexandre Proutiere
Automatic Control Dept., KTH
Stockholm 100-44, Sweden
[email protected]
Abstract
We consider the problem of community detection or clustering in the labeled
Stochastic Block Model (LSBM) with a finite number K of clusters of sizes
linearly growing with the global population of items n. Every pair of items is
labeled independently at random, and label ℓ appears with probability p(i, j, ℓ)
between two items in clusters indexed by i and j, respectively. The objective is to
reconstruct the clusters from the observation of these random labels.
Clustering under the SBM and its extensions has attracted much attention recently.
Most existing work aimed at characterizing the set of parameters such that it is
possible to infer clusters either positively correlated with the true clusters, or with
a vanishing proportion of misclassified items, or exactly matching the true clusters.
We find the set of parameters such that there exists a clustering algorithm with
at most s misclassified items on average under the general LSBM and for any
s = o(n), which solves one open problem raised in [2]. We further develop
an algorithm, based on simple spectral methods, that achieves this fundamental
performance limit within O(n polylog(n)) computations and without a-priori
knowledge of the model parameters.
1 Introduction
Community detection consists in extracting (a few) groups of similar items from a large global
population, and has applications in a wide spectrum of disciplines including social sciences, biology,
computer science, and statistical physics. The communities or clusters of items are inferred from the
observed pair-wise similarities between items, which, most often, are represented by a graph whose
vertices are items and edges are pairs of items known to share similar features.
The stochastic block model (SBM), introduced three decades ago in [12], constitutes a natural
performance benchmark for community detection, and has been, since then, widely studied. In the
SBM, the set of items V = {1, . . . , n} is partitioned into K non-overlapping clusters V_1, . . . , V_K,
that have to be recovered from an observed realization of a random graph. In the latter, an edge
between two items belonging to clusters Vi and Vj , respectively, is present with probability p(i, j),
independently of other edges. The analyses presented in this paper apply to the SBM, but also to the
labeled stochastic block model (LSBM) [11], a more general model to describe the similarities of
items. There, the observation of the similarity between two items comes in the form of a label taken
from a finite set L = {0, 1, . . . , L}, and label ` is observed between two items in clusters Vi and Vj ,
respectively, with probability p(i, j, `), independently of other labels. The standard SBM can be seen
as a particular instance of its labeled counterpart with two possible labels 0 and 1, and where the
edges present (resp. absent) in the SBM correspond to item pairs with label 1 (resp. 0). The problem
of cluster recovery under the LSBM consists in inferring the hidden partition V1 , . . . , VK from the
observation of the random labels on each pair of items.
Over the last few years, we have seen remarkable progress on the problem of cluster recovery under
the SBM (see [7] for an exhaustive literature review), highlighting its scientific relevance and richness.
Most recent work on the SBM aimed at characterizing the set of parameters (i.e., the probabilities
p(i, j) that there exists an edge between nodes in clusters i and j for 1 ≤ i, j ≤ K) such that some
qualitative recovery objectives can or cannot be met. For sparse scenarios where the average degree
of items in the graph is O(1), parameters under which it is possible to extract clusters positively
correlated with the true clusters have been identified [5, 18, 16]. When the average degree of the
graph is ω(1), one may predict the set of parameters allowing a cluster recovery with a vanishing (as
n grows large) proportion of misclassified items [22, 17], but one may also characterize parameters
for which an asymptotically exact cluster reconstruction can be achieved [1, 21, 8, 17, 2, 3, 13].
In this paper, we address the finer and more challenging question of determining, under the general
LSBM, the minimal number of misclassified items given the parameters of the model. Specifically,
for any given s = o(n), our goal is to identify the set of parameters such that it is possible to devise a
clustering algorithm with at most s misclassified items. Of course, if we achieve this goal, we shall
recover all the aforementioned results on the SBM.
Main results. We focus on the labeled SBM as described above, where each item is assigned
to cluster V_k with probability α_k > 0, independently of other items. We assume w.l.o.g. that
α_1 ≥ α_2 ≥ · · · ≥ α_K. We further assume that α = (α_1, . . . , α_K) does not depend on the total
population of items n. Conditionally on the assignment of items to clusters, the pair or edge
(v, w) ∈ V² has label ℓ ∈ L = {0, 1, . . . , L} with probability p(i, j, ℓ) when v ∈ V_i and w ∈ V_j.
W.l.o.g., 0 is the most frequent label, i.e., 0 = arg max_ℓ Σ_{i=1}^K Σ_{j=1}^K α_i α_j p(i, j, ℓ). Throughout the
paper, we typically assume that p̄ = o(1) and p̄n = ω(1), where p̄ = max_{i,j,ℓ≥1} p(i, j, ℓ) denotes the
maximum probability of observing a label different than 0. We shall explicitly state whether these
assumptions are made when deriving our results. In the standard SBM, the second assumption means
that the average degree of the corresponding random graph is ω(1). This also means that we can hope
to recover clusters with a vanishing proportion of misclassified items. We finally make the following
assumption: there exist positive constants η and ε such that for every i, j, k ∈ [K] = {1, . . . , K},

(A1) ∀ℓ ∈ L, p(i, j, ℓ)/p(i, k, ℓ) ≤ η, and (A2) Σ_{k=1}^K Σ_{ℓ=1}^L (p(i, k, ℓ) − p(j, k, ℓ))² / p̄² ≥ ε.

(A2) imposes a certain separation between the clusters. For example, in the standard SBM with two
communities, p(1, 1, 1) = p(2, 2, 1) = α and p(1, 2, 1) = β, (A2) is equivalent to 2(α − β)²/α² ≥ ε.
In summary, the LSBM is parametrized by α and p = (p(i, j, ℓ))_{1≤i,j≤K, 0≤ℓ≤L}, and recall that α
does not depend on n, whereas p does.
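To make the model concrete, a minimal LSBM sampler might look as follows (an illustrative Python sketch; the interface and names are ours):

import numpy as np

def sample_lsbm(n, alpha, p, rng=np.random):
    # alpha: K cluster probabilities; p[i, j, l] = probability that a pair
    # in clusters (i, j) carries label l. Returns the hidden assignment
    # sigma and the symmetric n x n label matrix.
    K = len(alpha)
    L1 = p.shape[2]
    sigma = rng.choice(K, size=n, p=alpha)
    labels = np.zeros((n, n), dtype=int)
    for v in range(n):
        for w in range(v + 1, n):
            l = rng.choice(L1, p=p[sigma[v], sigma[w]])
            labels[v, w] = labels[w, v] = l
    return sigma, labels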
For the above LSBM, we derive, for any arbitrary s = o(n), a necessary condition under which
there exists an algorithm inferring clusters with s misclassified items. We further establish that
under this condition, a simple extension of spectral algorithms extracts communities with fewer than s
misclassified items. To formalize these results, we introduce the divergence of (α, p). We denote by
p(i) the K × (L+1) matrix whose element on the j-th row and the (ℓ+1)-th column is p(i, j, ℓ),
and denote by p(i, j) ∈ [0, 1]^{L+1} the vector describing the probability distribution of the label of a
pair of items in V_i and V_j, respectively. Let P^{K×(L+1)} denote the set of K × (L+1) matrices such
that each row represents a probability distribution. The divergence D(α, p) of (α, p) is defined as
follows: D(α, p) = min_{i,j: i≠j} D_{L+}(α, p(i), p(j)) with

D_{L+}(α, p(i), p(j)) = min_{y ∈ P^{K×(L+1)}} max{ Σ_{k=1}^K α_k KL(y(k), p(i, k)), Σ_{k=1}^K α_k KL(y(k), p(j, k)) },

where KL denotes the Kullback-Leibler divergence between two label distributions, i.e.,
KL(y(k), p(i, k)) = Σ_{ℓ=0}^L y(k, ℓ) log( y(k, ℓ) / p(i, k, ℓ) ). Finally, we denote by ε^π(n) the number of
misclassified items under the clustering algorithm π, and by E[ε^π(n)] its expectation (with respect to the
randomness in the LSBM and in the algorithm).
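The divergence D_{L+} can be evaluated numerically as a small constrained minimization over the row-stochastic matrix y. The sketch below uses a simple row renormalization and a generic solver; a careful implementation would use a proper convex solver for the nonsmooth max:

import numpy as np
from scipy.optimize import minimize

def kl(y, p):
    # KL divergence between two label distributions (rows of length L+1)
    p = np.maximum(p, 1e-12)          # guard against zero entries
    mask = y > 0
    return float(np.sum(y[mask] * np.log(y[mask] / p[mask])))

def d_lplus(alpha, pi, pj):
    # D_L+(alpha, p(i), p(j)): minimize over row-stochastic y the maximum
    # of the two alpha-weighted KL sums in the definition above.
    K, L1 = pi.shape

    def objective(flat):
        y = flat.reshape(K, L1)
        y = y / y.sum(axis=1, keepdims=True)   # project rows to the simplex
        ki = sum(alpha[k] * kl(y[k], pi[k]) for k in range(K))
        kj = sum(alpha[k] * kl(y[k], pj[k]) for k in range(K))
        return max(ki, kj)

    x0 = ((pi + pj) / 2).ravel()               # midpoint as a warm start
    res = minimize(objective, x0, bounds=[(1e-9, 1.0)] * (K * L1))
    return res.fun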
We first derive a tight lower bound on the average number of misclassified items when the latter is
o(n). Note that such a bound was unknown even for the SBM [2].

Theorem 1 Assume that (A1) and (A2) hold, and that p̄n = ω(1). Let s = o(n). If there
exists a clustering algorithm π misclassifying on average less than s items asymptotically, i.e.,
lim sup_{n→∞} E[ε^π(n)]/s ≤ 1, then the parameters (α, p) of the LSBM satisfy:

lim inf_{n→∞} nD(α, p) / log(n/s) ≥ 1.    (1)
To state the corresponding positive result (i.e., the existence of an algorithm misclassifying only
s items), we make an additional assumption to avoid extremely sparse labels: (A3) there exists a
constant κ > 0 such that np(i, j, ℓ) ≥ (np̄)^κ for all i, j and ℓ ≥ 1.

Theorem 2 Assume that (A1), (A2), and (A3) hold, and that p̄ = o(1), p̄n = ω(1). Let s = o(n). If
the parameters (α, p) of the LSBM satisfy (1), then the Spectral Partition (SP) algorithm presented
in Section 4 misclassifies at most s items with high probability, i.e., lim_{n→∞} P[ε^{SP}(n) ≤ s] = 1.
These theorems indicate that under the LSBM with parameters satisfying (A1) and (A2), the number
of misclassified items scales at least as n exp(−nD(α, p)(1 + o(1))) under any clustering algorithm,
irrespective of its complexity. They further establish that the Spectral Partition algorithm reaches this
fundamental performance limit under the additional condition (A3). We note that the SP algorithm
runs in polynomial time, i.e., it requires O(n² p̄ log(n)) floating-point operations.
We further establish a necessary and sufficient condition on the parameters of the LSBM for the
existence of a clustering algorithm recovering the clusters exactly with high probability. Deriving
such a condition was also open [2].

Theorem 3 Assume that (A1) and (A2) hold. If there exists a clustering algorithm that does
not misclassify any item with high probability, then the parameters (α, p) of the LSBM satisfy:
lim inf_{n→∞} nD(α, p)/log(n) ≥ 1. If this condition holds, then under (A3), the SP algorithm recovers the
clusters exactly with high probability.

The paper is organized as follows. Section 2 presents the related work and examples of application
of our results. In Section 3, we sketch the proof of Theorem 1, which leverages change-of-measure
and coupling arguments. We present in Section 4 the Spectral Partition algorithm and analyze
its performance (we outline the proof of Theorem 2). All results are proved in detail in the
supplementary material.
2 Related Work and Applications
2.1 Related work
Cluster recovery in the SBM has attracted a lot of attention recently. We summarize below existing
results, and compare them to ours. Results are categorized depending on the targeted level of
performance. First, we consider the notion of detectability, the lowest level of performance requiring
that the extracted clusters are just positively correlated with the true clusters. Second, we look at
asymptotically accurate recovery, stating that the proportion of misclassified items vanishes as n
grows large. Third, we present existing results regarding exact cluster recovery, which means that no
item is misclassified. Finally, we report recent work whose objective, like ours, is to characterize the
optimal cluster recovery rate.
Detectability. Necessary and sufficient conditions for detectability have been studied for the binary
symmetric SBM (i.e., L = 1, K = 2, α_1 = α_2, p(1, 1, 1) = p(2, 2, 1) = α, and p(1, 2, 1) =
p(2, 1, 1) = β). In the sparse regime where α, β = o(1), and for the binary symmetric SBM, the main
focus has been on identifying the phase transition threshold (a condition on α and β) for detectability:
it was conjectured in [5] that if n(α − β) < √(2n(α + β)) (i.e., under the threshold), no algorithm
can perform better than a simple random assignment of items to clusters, and above the threshold,
clusters can partially be recovered. The conjecture was recently proved in [18] (necessary condition)
and [16] (sufficient condition). The problem of detectability has also been studied recently in [24]
for the asymmetric SBM with more than two clusters of possibly different sizes. Interestingly, it is
shown that in most cases, the phase transition for detectability disappears.
The present paper is not concerned with conditions for detectability. Indeed detectability means that
only a strictly positive proportion of items can be correctly classified, whereas here, we impose that
the proportion of misclassified items vanishes as n grows large.
Asymptotically accurate recovery. A necessary and sufficient condition for asymptotically accurate
recovery in the SBM (with any number of clusters of different but linearly increasing sizes) has been
derived in [22] and [17]. Using our notion of divergence specialized to the SBM, this condition is
nD(α, p) = ω(1). Our results are more precise since the minimal achievable number of misclassified
items is characterized, and apply to a broader setting since they are valid for the generic LSBM.
Asymptotically exact recovery. Conditions for exact cluster recovery in the SBM have also been studied recently. [1, 17, 8] provide a necessary and sufficient condition for asymptotically exact recovery in the binary symmetric SBM. For example, it is shown that when p = a log(n)/n and q = b log(n)/n for a > b, clusters can be recovered exactly if and only if (a + b)/2 − √(ab) ≥ 1. In [2, 3], the authors consider a more general SBM corresponding to our LSBM with L = 1. They define the CH-divergence as:
$$D_+(\alpha, p(i), p(j)) = \frac{n}{\log(n)} \max_{\lambda\in[0,1]} \sum_{k=1}^{K} \alpha_k \Big[ (1-\lambda)\,p(i,k,1) + \lambda\, p(j,k,1) - p(i,k,1)^{1-\lambda}\, p(j,k,1)^{\lambda} \Big],$$
and show that min_{i≠j} D_+(α, p(i), p(j)) > 1 is a necessary and sufficient condition for asymptotically exact reconstruction. The following claim, proven in the supplementary material, relates D_+ to D_{L+}.
Claim 4. When p̄ = o(1), we have for all i, j:
$$D_{L+}(\alpha, p(i), p(j)) \;\overset{n\to\infty}{\sim}\; \max_{\lambda\in[0,1]} \sum_{\ell=1}^{L} \sum_{k=1}^{K} \alpha_k \Big[ (1-\lambda)\,p(i,k,\ell) + \lambda\, p(j,k,\ell) - p(i,k,\ell)^{1-\lambda}\, p(j,k,\ell)^{\lambda} \Big].$$
Thus, the results in [2, 3] are obtained by applying Theorem 3 and Claim 4.
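To make the maximization over λ in these divergences concrete, here is a small numerical sketch (ours, not part of the paper); the function name, the grid resolution, and the test parameters are all illustrative choices:

```python
import numpy as np

def ch_divergence(alpha, p_i, p_j, n, grid=1001):
    """CH-divergence D_+(alpha, p(i), p(j)) for an SBM (L = 1), with the
    maximum over lambda in [0, 1] approximated on a grid.

    alpha : (K,) cluster proportions; p_i, p_j : (K,) probabilities
    p(i, k, 1) and p(j, k, 1); n : number of items.
    """
    lam = np.linspace(0.0, 1.0, grid)[:, None]        # (grid, 1)
    vals = ((1 - lam) * p_i + lam * p_j
            - p_i ** (1 - lam) * p_j ** lam)          # (grid, K)
    return n / np.log(n) * np.max(vals @ alpha)

# Binary symmetric SBM with p = a log(n)/n, q = b log(n)/n: D_+ equals
# (a + b)/2 - sqrt(a * b), the classical exact-recovery statistic.
n, a, b = 10_000, 5.0, 1.0
p1 = np.array([a, b]) * np.log(n) / n
p2 = np.array([b, a]) * np.log(n) / n
print(ch_divergence(np.array([0.5, 0.5]), p1, p2, n), (a + b) / 2 - np.sqrt(a * b))
```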
In [13], the authors consider a symmetric labeled SBM where communities are balanced (i.e., α_k = 1/K for all k) and where label probabilities are simply defined as p(i,i,ℓ) = p(ℓ) for all i and p(i,j,ℓ) = q(ℓ) for all i ≠ j. It is shown that nI/log(n) > 1 is necessary and sufficient for asymptotically exact recovery, where
$$I = -\frac{2}{K} \log \sum_{\ell=0}^{L} \sqrt{p(\ell)\, q(\ell)}.$$
We can relate I to D(α, p):
Claim 5. In the LSBM with K clusters, if p̄ = o(1), α_i = 1/K for all i, and p(i,i,ℓ) = p(ℓ), p(i,j,ℓ) = q(ℓ) for all i, j, ℓ such that i ≠ j, we have:
$$D(\alpha, p) \;\overset{n\to\infty}{\sim}\; -\frac{2}{K} \log \sum_{\ell=0}^{L} \sqrt{p(\ell)\, q(\ell)}.$$
Again from this claim, the results derived in [13] are obtained by applying Theorem 3 and Claim 5.
Optimal recovery rate. In [6, 19], the authors consider the binary SBM in the sparse regime where the average degree of items in the graph is O(1), and identify the minimal number of misclassified items for very specific intra- and inter-cluster edge probabilities p and q. Again, the sparse regime is out of the scope of the present paper. [23, 7] are concerned with the general SBM corresponding to our LSBM with L = 1, and with regimes where asymptotically accurate recovery is possible. The authors first characterize the optimal recovery rate in a minimax framework. More precisely, they consider a (potentially large) set of possible parameters (α, p), and provide a lower bound on the expected number of misclassified items for the worst parameters in this set. Our lower bound (Theorem 1) is more precise as it is model-specific, i.e., we provide the minimal expected number of misclassified items for a given parameter (α, p) (and for a more general class of models). Then the authors propose a clustering algorithm, with time complexity O(n³ log(n)), that achieves their minimax recovery rate. In comparison, our algorithm yields an optimal recovery rate for any given parameter (α, p), exhibits a lower running time of O(n²p̄ log(n)), and applies to the generic LSBM.
2.2 Applications
We provide here a few examples of application of our results, illustrating their versatility. In all examples, f(n) is a function such that f(n) = ω(1), and a, b are fixed real numbers such that a > b.
The binary SBM. Consider the binary SBM where the average item degree is Θ(f(n)), represented by an LSBM with parameters L = 1, K = 2, α = (α₁, 1−α₁), p(1,1,1) = p(2,2,1) = a f(n)/n, and p(1,2,1) = p(2,1,1) = b f(n)/n. From Theorems 1 and 2, the optimal number of misclassified vertices scales as n·exp(−g(α₁, a, b) f(n)(1 + o(1))) when α₁ ≤ 1/2 (w.l.o.g.), where
$$g(\alpha_1, a, b) := \max_{\lambda\in[0,1]} \; (1-\alpha_1-\lambda+2\alpha_1\lambda)\,a + (\alpha_1+\lambda-2\alpha_1\lambda)\,b - \alpha_1\, a^{\lambda} b^{1-\lambda} - (1-\alpha_1)\, a^{1-\lambda} b^{\lambda}.$$
It can be easily checked that g(α₁, a, b) ≥ g(1/2, a, b) = (1/2)(√a − √b)² (letting λ = 1/2). The worst case is hence obtained when the two clusters are of equal sizes. When f(n) = log(n), we also note that the condition for asymptotically exact recovery is g(α₁, a, b) ≥ 1.
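As a quick sanity check on this expression (an illustrative snippet of ours, using a simple grid search in place of exact maximization):

```python
import numpy as np

def g(alpha1, a, b, grid=10_001):
    """Error exponent g(alpha_1, a, b) for the binary SBM (lambda on a grid)."""
    lam = np.linspace(0.0, 1.0, grid)
    vals = ((1 - alpha1 - lam + 2 * alpha1 * lam) * a
            + (alpha1 + lam - 2 * alpha1 * lam) * b
            - alpha1 * a ** lam * b ** (1 - lam)
            - (1 - alpha1) * a ** (1 - lam) * b ** lam)
    return vals.max()

a, b = 5.0, 1.0
# At alpha_1 = 1/2 the maximizer is lambda = 1/2 and g = (sqrt(a) - sqrt(b))^2 / 2;
# unequal cluster sizes can only increase the exponent.
print(g(0.5, a, b), (np.sqrt(a) - np.sqrt(b)) ** 2 / 2)
print(g(0.3, a, b) >= g(0.5, a, b))   # True
```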
Recovering a single hidden community. As in [9], consider a random graph model with a hidden community consisting of αn vertices; edges between vertices belonging to the hidden community are present with probability a f(n)/n, and edges between other pairs are present with probability b f(n)/n. This is modeled by an LSBM with parameters K = 2, L = 1, α₁ = α, p(1,1,1) = a f(n)/n, and p(1,2,1) = p(2,1,1) = p(2,2,1) = b f(n)/n. The minimal number of misclassified items when searching for the hidden community scales as n·exp(−h(α, a, b) f(n)(1 + o(1))) where
$$h(\alpha, a, b) := \alpha \left( a - (a - b)\, \frac{1 + \log(a-b) - \log(a \log(a/b))}{\log(a/b)} \right).$$
When f(n) = log(n), the condition for asymptotically exact recovery of the hidden community is h(α, a, b) ≥ 1.
Optimal sampling for community detection under the SBM. Consider a dense binary symmetric SBM with intra- and inter-cluster edge probabilities a and b. In practice, to recover the clusters, one might not be able to observe the entire random graph, but instead sample its vertex (here item) pairs as considered in [22]. Assume for instance that any pair of vertices is sampled with probability δ f(n)/n for some fixed δ > 0, independently of other pairs. We can model such a scenario using an LSBM with three labels, namely ∅, 0 and 1, corresponding respectively to the absence of observation (the vertex pair is not sampled), the observation of the absence of an edge, and the observation of the presence of an edge, with parameters p(i,j,∅) = 1 − δ f(n)/n for all i, j ∈ {1,2}, p(1,1,1) = p(2,2,1) = a δ f(n)/n, and p(1,2,1) = p(2,1,1) = b δ f(n)/n. The minimal number of misclassified vertices scales as n·exp(−l(δ, a, b) f(n)(1 + o(1))) where
$$l(\delta, a, b) := \delta \left( 1 - \sqrt{ab} - \sqrt{(1-a)(1-b)} \right).$$
When f(n) = log(n), the condition for asymptotically exact recovery is l(δ, a, b) ≥ 1.
Signed networks. Signed networks [15, 20] are used in social sciences to model positive and negative interactions between individuals. These networks can be represented by an LSBM with three possible labels, namely 0, + and −, corresponding to the absence of interaction, positive interaction, and negative interaction, respectively. Consider such an LSBM with parameters K = 2, α₁ = α₂, p(1,1,+) = p(2,2,+) = a₊ f(n)/n, p(1,1,−) = p(2,2,−) = a₋ f(n)/n, p(1,2,+) = p(2,1,+) = b₊ f(n)/n, and p(1,2,−) = p(2,1,−) = b₋ f(n)/n, for some fixed a₊, a₋, b₊, b₋ such that a₊ > b₊ and a₋ < b₋. The minimal number of misclassified individuals here scales as n·exp(−m(α, a₊, a₋, b₊, b₋) f(n)(1 + o(1))) where
$$m(\alpha, a_+, a_-, b_+, b_-) := \frac{1}{2} \left[ \left(\sqrt{a_+} - \sqrt{b_+}\right)^2 + \left(\sqrt{a_-} - \sqrt{b_-}\right)^2 \right].$$
When f(n) = log(n), the condition for asymptotically exact recovery is m(α, a₊, a₋, b₊, b₋) ≥ 1.
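The exact-recovery conditions of the last three examples are easy to evaluate numerically; the snippet below (ours, with arbitrary parameter values) checks whether each exponent reaches 1 when f(n) = log(n):

```python
import numpy as np

def h(alpha, a, b):        # single hidden community
    return alpha * (a - (a - b)
                    * (1 + np.log(a - b) - np.log(a * np.log(a / b)))
                    / np.log(a / b))

def l(delta, a, b):        # sampled SBM, 0 < b < a < 1
    return delta * (1 - np.sqrt(a * b) - np.sqrt((1 - a) * (1 - b)))

def m(ap, am, bp, bm):     # signed networks
    return 0.5 * ((np.sqrt(ap) - np.sqrt(bp)) ** 2
                  + (np.sqrt(am) - np.sqrt(bm)) ** 2)

# With f(n) = log(n), exact recovery requires the exponent to be >= 1:
print(h(0.5, 12.0, 2.0) >= 1.0)      # True for these values
print(l(3.0, 0.9, 0.1) >= 1.0)       # True
print(m(6.0, 1.0, 1.0, 6.0) >= 1.0)  # True
```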
3 Fundamental Limits: Change of Measures through Coupling
In this section, we explain the construction of the proof of Theorem 1. The latter relies on an appropriate change-of-measure argument, frequently used to identify upper performance bounds in online stochastic optimization problems [14]. In the following, we refer to Φ, defined by parameters (α, p), as the true stochastic model under which all the observed random labels are generated, and denote by P_Φ = P (resp. E_Φ[·] = E[·]) the corresponding probability measure (resp. expectation). In our change-of-measure argument, we construct a second stochastic model Ψ (whose corresponding probability measure and expectation are P_Ψ and E_Ψ[·], respectively). Using a change of measures from P_Φ to P_Ψ, we relate the expected number of misclassified items E_Φ[ε^π(n)] under any clustering algorithm π to the expected (w.r.t. P_Ψ) log-likelihood ratio Q of the observed labels under P_Φ and P_Ψ. Specifically, we show that, roughly, log(n/E_Φ[ε^π(n)]) must be smaller than E_Ψ[Q] for n large enough.
Construction of Ψ. Let (i*, j*) = arg min_{i,j: i<j} D_{L+}(α, p(i), p(j)), and let v* denote the smallest item index that belongs to cluster i* or j*. If both V_{i*} and V_{j*} are empty, we define v* = n. Let q ∈ P^{K×(L+1)}, i.e., a collection q = (q(k))_{1≤k≤K} of distributions over the L+1 labels, such that
$$D(\alpha, p) = \sum_{k=1}^{K} \alpha_k\, \mathrm{KL}(q(k), p(i^*, k)) = \sum_{k=1}^{K} \alpha_k\, \mathrm{KL}(q(k), p(j^*, k)).$$
The existence of such a q is proved in Lemma 7 in the supplementary material. Now, to define the stochastic model Ψ, we couple the generation of labels under Φ and Ψ as follows.
1. We first generate the random clusters V₁, ..., V_K under Φ, and extract i*, j*, and v*. The clusters generated under Ψ are the same as those generated under Φ. For any v ∈ V, we denote by σ(v) the cluster of item v.
2. For all pairs (v, w) such that v ≠ v* and w ≠ v*, the labels generated under Ψ are the same as those generated under Φ, i.e., the label ℓ is observed on the edge (v, w) with probability p(σ(v), σ(w), ℓ).
3. Under Ψ, for any v ≠ v*, the observed label on the edge (v, v*) is ℓ with probability q(σ(v), ℓ).
Let x_{v,w} denote the label observed for the pair (v, w). We introduce Q, the log-likelihood ratio of the observed labels under P_Φ and P_Ψ, as:
$$Q = \sum_{v=1}^{v^*-1} \log \frac{q(\sigma(v), x_{v^*,v})}{p(\sigma(v^*), \sigma(v), x_{v^*,v})} + \sum_{v=v^*+1}^{n} \log \frac{q(\sigma(v), x_{v^*,v})}{p(\sigma(v^*), \sigma(v), x_{v^*,v})}. \qquad (2)$$
Let π be a clustering algorithm with output (V̂_k)_{1≤k≤K}, and let E = ∪_{1≤k≤K} V̂_k \ V_k be the set of misclassified items under π. Note that in general in our analysis, we always assume without loss of generality that |∪_{1≤k≤K} V̂_k \ V_k| ≤ |∪_{1≤k≤K} V̂_{γ(k)} \ V_k| for any permutation γ, so that the set of misclassified items is indeed E. By definition, ε^π(n) = |E|. Since under Φ, items are interchangeable (remember that items are assigned to the various clusters in an i.i.d. manner), we have: nP_Φ{v* ∈ E} = E_Φ[ε^π(n)] = E[ε^π(n)].
Next, we establish a relationship between E[ε^π(n)] and the distribution of Q under P_Ψ. For any function f(n), we can prove that:
$$P_\Psi\{Q \le f(n)\} \;\le\; \exp(f(n))\, \frac{E_\Phi[\varepsilon^\pi(n)]}{(\alpha_{i^*}+\alpha_{j^*})\,n} \;+\; \frac{\alpha_{j^*}}{\alpha_{i^*}+\alpha_{j^*}}.$$
Using this result with f(n) = log(n/E_Φ[ε^π(n)]) − log(2/α_{i*}), and Chebyshev's inequality, we deduce that:
$$\log\!\left(n/E_\Phi[\varepsilon^\pi(n)]\right) - \log(2/\alpha_{i^*}) \;\le\; E_\Psi[Q] + \sqrt{\frac{4}{\alpha_{i^*}}\, E_\Psi\!\left[(Q - E_\Psi[Q])^2\right]},$$
and thus, a necessary condition for E[ε^π(n)] ≤ s is:
$$\log(n/s) - \log(2/\alpha_{i^*}) \;\le\; E_\Psi[Q] + \sqrt{\frac{4}{\alpha_{i^*}}\, E_\Psi\!\left[(Q - E_\Psi[Q])^2\right]}. \qquad (3)$$
Analysis of Q. In view of (3), we can obtain a necessary condition for E[ε^π(n)] ≤ s if we evaluate E_Ψ[Q] and E_Ψ[(Q − E_Ψ[Q])²]. To evaluate E_Ψ[Q], we can first prove that v* ≤ log(n)² with high probability. From this, we can approximate E_Ψ[Q] by E_Ψ[Σ_{v=v*+1}^{n} log(q(σ(v), x_{v*,v})/p(σ(v*), σ(v), x_{v*,v}))], which is itself well approximated by nD(α, p). More formally, we can show that:
$$E_\Psi[Q] \;\le\; \big(n + 2\log(n)^2\big)\, D(\alpha, p) + 3. \qquad (4)$$
Similarly, we prove that E_Ψ[(Q − E_Ψ[Q])²] = O(np̄), which in view of Lemma 8 (refer to the supplementary material) and assumption (A2), implies that E_Ψ[(Q − E_Ψ[Q])²] = o(nD(α, p)).
We complete the proof of Theorem 1 by putting the above arguments together: from (3), (4) and the above analysis of Q, when the expected number of misclassified items is less than s (i.e., E[ε^π(n)] ≤ s), we must have liminf_{n→∞} nD(α, p)/log(n/s) ≥ 1.
4 The Spectral Partition Algorithm and its Optimality
In this section, we sketch the proof of Theorem 2. To this aim, we present the Spectral Partition (SP) algorithm and analyze its performance. The SP algorithm consists of two parts, and its detailed pseudo-code is presented at the beginning of the supplementary document (see Algorithm 1).
The first part of the algorithm can be interpreted as an initialization for its second part, and consists in applying a spectral decomposition of an n × n random matrix A constructed from the observed labels. More precisely, A = Σ_{ℓ=1}^{L} w_ℓ A^ℓ, where A^ℓ is the binary matrix identifying the item pairs with observed label ℓ, i.e., for all v, w ∈ V, A^ℓ_{vw} = 1 if and only if (v, w) has label ℓ. The weight w_ℓ for label ℓ ∈ {1, ..., L} is generated uniformly at random in [0, 1], independently of other weights. From the spectral decomposition of A, we estimate the number of communities and provide asymptotically accurate estimates S₁, ..., S_K of the hidden clusters, i.e., we show that when np̄ = ω(1), with high probability, K̂ = K and there exists a permutation γ of {1, ..., K} such that
$$\frac{1}{n} \left| \bigcup_{k=1}^{K} V_k \setminus S_{\gamma(k)} \right| = O\!\left( \frac{\log(n\bar p)^2}{n\bar p} \right).$$
This first part of the SP algorithm is adapted from algorithms proposed for the standard SBM in [4, 22] to handle the additional labels in the model without knowledge of the number K of clusters.
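As an illustration of this construction (a minimal sketch of ours, not the paper's Algorithm 1, assuming the observed labels are stored as a dense symmetric array with 0 meaning "no label"):

```python
import numpy as np

rng = np.random.default_rng(0)

def weighted_label_matrix(labels, L):
    """Build A = sum_l w_l A^l from a symmetric (n, n) integer array of labels.

    labels[v, w] in {0, 1, ..., L}; label 0 (no observed label) contributes
    nothing. Each weight w_l is drawn uniformly at random from [0, 1].
    """
    w = rng.uniform(0.0, 1.0, size=L + 1)
    w[0] = 0.0                                  # label 0 carries no weight
    return w[labels]                            # A[v, w] = w_{labels[v, w]}

# Toy usage: n = 6 items, L = 2 labels.
raw = rng.integers(0, 3, size=(6, 6))
labels = np.triu(raw, 1) + np.triu(raw, 1).T    # symmetric, zero diagonal
A = weighted_label_matrix(labels, L=2)
```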
The second part is novel, and is critical to ensure the optimality of the SP algorithm. It consists in first constructing an estimate p̂ of the true parameters p of the model from the matrices (A^ℓ)_{1≤ℓ≤L} and the estimated clusters S₁, ..., S_K provided in the first part of SP. We expect p̂ to be accurate since S₁, ..., S_K are asymptotically accurate. Then our cluster estimates are iteratively improved. We run ⌊log(n)⌋ iterations. Let S₁^{(t)}, ..., S_K^{(t)} denote the clusters estimated after the t-th iteration, initialized with (S₁^{(0)}, ..., S_K^{(0)}) = (S₁, ..., S_K). The improved clusters S₁^{(t+1)}, ..., S_K^{(t+1)} are obtained by assigning each item v ∈ V to the cluster maximizing a log-likelihood formed from p̂, S₁^{(t)}, ..., S_K^{(t)}, and the observations (A^ℓ)_{1≤ℓ≤L}: v is assigned to S^{(t+1)}_{k*} where
$$k^* = \arg\max_{k} \; \sum_{i=1}^{K} \sum_{w \in S_i^{(t)}} \sum_{\ell=0}^{L} A^\ell_{vw} \log \hat p(k, i, \ell).$$
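A sketch (ours, under the simplifying assumptions that labels are stored densely and every entry of p̂ is positive) of one such improvement iteration:

```python
import numpy as np

def improve_once(labels, clusters, p_hat):
    """One improvement pass: reassign each item v to the cluster k maximizing
    sum over (i, w in S_i, l) of 1{label(v, w) = l} * log p_hat(k, i, l).

    labels : (n, n) symmetric int array with entries in {0, ..., L}
    clusters : (n,) int array, current cluster index of each item
    p_hat : (K, K, L + 1) estimated label probabilities, all entries > 0
    """
    n, K, L1 = labels.shape[0], p_hat.shape[0], p_hat.shape[2]
    counts = np.zeros((n, K, L1))   # counts[v, i, l] = #{w in S_i : label(v, w) = l}
    for i in range(K):
        members = labels[:, clusters == i]   # label columns of cluster i
        for l in range(L1):
            counts[:, i, l] = (members == l).sum(axis=1)
    # log-likelihood of assigning v to cluster k, summed over (i, l):
    loglik = np.einsum('vil,kil->vk', counts, np.log(p_hat))
    return loglik.argmax(axis=1)
```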
Part 1: Spectral Decomposition. The spectral decomposition is described in Lines 1 to 4 of Algorithm 1. As usual in spectral methods, the matrix A is first trimmed (to remove rows and columns corresponding to items with too many observed labels, as these would perturb the spectral analysis). To this aim, we estimate the average number of labels per item, and use this estimate, denoted by p̂ in Algorithm 1, as a reference for the trimming process. Γ and A_Γ denote the set of remaining items after trimming and the corresponding trimmed matrix, respectively.
If the number of clusters K is known and if we do not account for time complexity, the two-step algorithm in [4] can extract the clusters from A_Γ: first the optimal rank-K approximation A^{(K)} of A_Γ is derived using the SVD; then, one applies the k-means algorithm to the columns of A^{(K)} to reconstruct the clusters. The number of misclassified items after this two-step algorithm is obtained as follows. Let M^ℓ = E[A^ℓ_Γ], and M = Σ_{ℓ=1}^{L} w_ℓ M^ℓ (using the same weights as those defining A). Then, M is of rank K. If v and w are in the same cluster, M_v = M_w, and if v and w do not belong to the same cluster, from (A2), we must have with high probability: ‖M_v − M_w‖₂ = Ω(p̄√n). Thus, the k-means algorithm misclassifies v only if ‖A^{(K)}_v − M_v‖₂ = Ω(p̄√n). By leveraging elements of random graph and random matrix theory, we can establish that Σ_v ‖A^{(K)}_v − M_v‖₂² = ‖A^{(K)} − M‖_F² = O(np̄) with high probability. Hence the algorithm misclassifies O(1/p̄) items with high probability.
Here the number of clusters K is not given a priori. In this scenario, Algorithm 2 estimates the rank of M using a singular value thresholding procedure. To reduce the complexity of the algorithm, the singular values and singular vectors are obtained using the iterative power method instead of a direct SVD. It is known from [10] that with Θ(log(n)) iterations, the iterative power method finds the singular values and the rank-K approximation very accurately. Hence, when np̄ = ω(1), we can easily estimate the rank of M by looking at the number of singular values above the threshold √(np̄) log(np̄), since we know from random matrix theory that the (K+1)-th singular value of A_Γ is much less than √(np̄) log(np̄) with high probability. In the pseudo-code of Algorithm 2, the estimated rank of M is denoted by K̂.
The rank-K̂ approximation of A_Γ obtained by the iterative power method is Â = Û V̂ = Û Û^⊤ A_Γ. From the columns of Â, we can estimate the number of clusters and classify items. Almost every column of Â is located around the corresponding column of M within a distance (1/2)√(np̄²/log(np̄)), since Σ_v ‖Â_v − M_v‖₂² = ‖Â − M‖_F² = O(np̄ log(np̄)²) with high probability (we rigorously analyze this distance in Section D.2 of the supplementary material). From this observation, the columns can be categorized into K groups. To find these groups, we randomly pick log(n) reference columns and, for each reference column, search for all columns within distance √(np̄²/log(np̄)). Then, with high probability, each cluster has at least one reference column and each reference column can find most of its cluster members. Finally, the K groups are identified using the reference columns. To this aim, we compute the distance of n log(n) column pairs Â_v, Â_w. Observe that ‖Â_v − Â_w‖₂ = ‖V̂_v − V̂_w‖₂ for any v, w ∈ Γ, since the columns of Û are orthonormal. Now V̂_v is of dimension K̂, and hence we can identify the groups using O(nK̂ log(n)) operations.
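A compact sketch (ours) of this rank-estimation step; the cap on the number of computed singular values and the function names are our own choices:

```python
import numpy as np

def estimate_rank(A_trim, p_bar, seed=0, r_max=50):
    """Estimate rank(M): count singular values of the trimmed matrix above
    sqrt(n * p_bar) * log(n * p_bar), computed with the iterative power method."""
    rng = np.random.default_rng(seed)
    n = A_trim.shape[0]
    Q = rng.standard_normal((n, min(n, r_max)))
    for _ in range(int(np.ceil(np.log(n)))):    # Theta(log n) power iterations
        Q, _ = np.linalg.qr(A_trim @ (A_trim.T @ Q))
    sigma = np.linalg.svd(Q.T @ A_trim, compute_uv=False)
    return int((sigma > np.sqrt(n * p_bar) * np.log(n * p_bar)).sum())

# Toy usage on a noisy rank-2 matrix:
rng = np.random.default_rng(1)
B = rng.standard_normal((200, 2)) @ rng.standard_normal((2, 200))
print(estimate_rank(B + 0.01 * rng.standard_normal((200, 200)), p_bar=0.05))  # 2
```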
Theorem 6. Assume that (A1) and (A2) hold, and that np̄ = ω(1). After Step 4 (spectral decomposition) in the SP algorithm, with high probability, K̂ = K and there exists a permutation γ of {1, ..., K} such that:
$$\left| \bigcup_{k=1}^{K} V_k \setminus S_{\gamma(k)} \right| = O\!\left( \frac{\log(n\bar p)^2}{\bar p} \right).$$
Part 2: Successive cluster improvements. Part 2 of the SP algorithm is described in Lines 5 and 6 of Algorithm 1. To analyze the performance of each improvement iteration, we introduce the set of items H as the largest subset of V such that for all v ∈ H: (H1) e(v, V) ≤ 10 np̄ L; (H2) when v ∈ V_k, Σ_{i=1}^{K} Σ_{ℓ=0}^{L} e(v, V_i, ℓ) log(p(k,i,ℓ)/p(j,i,ℓ)) ≥ np̄/log(np̄)⁴ for all j ≠ k; (H3) e(v, V \ H) ≤ 2 log(np̄)²; where for any S ⊂ V and ℓ, e(v, S, ℓ) = Σ_{w∈S} A^ℓ_{vw}, and e(v, S) = Σ_{ℓ=1}^{L} e(v, S, ℓ). Condition (H1) means that there are not too many observed labels ℓ ≥ 1 on pairs including v, (H2) means that an item v ∈ V_k must be classified to V_k when considering the log-likelihood, and (H3) states that v does not share too many labels with items outside H.
We then prove that |V \ H| ≤ s with high probability when nD(α, p) − np̄/log(np̄)³ ≥ log(n/s) + √(log(n/s)). This is mainly done using concentration arguments to relate the quantity Σ_{i=1}^{K} Σ_{ℓ=0}^{L} e(v, V_i, ℓ) log(p(k,i,ℓ)/p(j,i,ℓ)) involved in (H2) to nD(α, p).
Finally, we establish that if the clusters provided after the first part of the SP algorithm are asymptotically accurate, then after log(n) improvement iterations, there is no misclassified item left in H. To that aim, we denote by E^{(t)} the set of misclassified items after the t-th iteration, and show that with high probability, for all t, |E^{(t+1)} ∩ H| / |E^{(t)} ∩ H| ≤ (np̄)^{-1}. This completes the proof of Theorem 2, since after log(n) iterations, the only misclassified items are those in V \ H.
Acknowledgments
We gratefully acknowledge the support of the U.S. Department of Energy through the LANL/LDRD
Program for this work.
References
[1] E. Abbe, A. Bandeira, and G. Hall. Exact recovery in the stochastic block model. CoRR, abs/1405.3267, 2014.
[2] E. Abbe and C. Sandon. Community detection in general stochastic block models: fundamental limits and efficient recovery algorithms. In FOCS, 2015.
[3] E. Abbe and C. Sandon. Recovering communities in the general stochastic block model without knowing the parameters. In NIPS, 2015.
[4] A. Coja-Oghlan. Graph partitioning via adaptive spectral techniques. Combinatorics, Probability & Computing, 19(2):227–284, 2010.
[5] A. Decelle, F. Krzakala, C. Moore, and L. Zdeborová. Inference and phase transitions in the detection of modules in sparse networks. Phys. Rev. Lett., 107, Aug 2011.
[6] Y. Deshpande, E. Abbe, and A. Montanari. Asymptotic mutual information for the two-groups stochastic block model. CoRR, abs/1507.08685, 2015.
[7] C. Gao, Z. Ma, A. Zhang, and H. Zhou. Achieving optimal misclassification proportion in stochastic block model. CoRR, abs/1505.03772, 2015.
[8] B. Hajek, Y. Wu, and J. Xu. Achieving exact cluster recovery threshold via semidefinite programming. CoRR, abs/1412.6156, 2014.
[9] B. Hajek, Y. Wu, and J. Xu. Information limits for recovering a hidden community. CoRR, abs/1509.07859, 2015.
[10] N. Halko, P. Martinsson, and J. Tropp. Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM Review, 53(2):217–288, 2011.
[11] S. Heimlicher, M. Lelarge, and L. Massoulié. Community detection in the labelled stochastic block model. In NIPS Workshop on Algorithmic and Statistical Approaches for Large Social Networks, 2012.
[12] P. Holland, K. Laskey, and S. Leinhardt. Stochastic blockmodels: First steps. Social Networks, 5(2):109–137, 1983.
[13] V. Jog and P. Loh. Information-theoretic bounds for exact recovery in weighted stochastic block models using the Renyi divergence. CoRR, abs/1509.06418, 2015.
[14] T. Lai and H. Robbins. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6(1):4–22, 1985.
[15] J. Leskovec, D. Huttenlocher, and J. Kleinberg. Signed networks in social media. In CHI, 2010.
[16] L. Massoulié. Community detection thresholds and the weak Ramanujan property. In STOC, 2014.
[17] E. Mossel, J. Neeman, and A. Sly. Consistency thresholds for binary symmetric block models. In STOC, 2015.
[18] E. Mossel, J. Neeman, and A. Sly. Reconstruction and estimation in the planted partition model. Probability Theory and Related Fields, 162(3-4):431–461, 2015.
[19] E. Mossel and J. Xu. Density evolution in the degree-correlated stochastic block model. CoRR, abs/1509.03281, 2015.
[20] V. Traag and J. Bruggeman. Community detection in networks with positive and negative links. Physical Review E, 80(3):036115, 2009.
[21] S. Yun and A. Proutiere. Accurate community detection in the stochastic block model via spectral algorithms. CoRR, abs/1412.7335, 2014.
[22] S. Yun and A. Proutiere. Community detection via random and adaptive sampling. In COLT, 2014.
[23] A. Zhang and H. Zhou. Minimax rates of community detection in stochastic block models. CoRR, abs/1507.05313, 2015.
[24] P. Zhang, C. Moore, and M. Newman. Community detection in networks with unequal groups. CoRR, abs/1509.00107, 2015.
Multi-step learning and
underlying structure in statistical models
Maia Fraser
Dept. of Mathematics and Statistics
Brain and Mind Research Institute
University of Ottawa
Ottawa, ON K1N 6N5, Canada
[email protected]
Abstract
In multi-step learning, where a final learning task is accomplished via a sequence of intermediate learning tasks, the intuition is that successive steps or levels transform the initial data into representations more and more "suited" to the final learning task. A related principle arises in transfer learning, where Baxter (2000) proposed a theoretical framework to study how learning multiple tasks transforms the inductive bias of a learner. The most widespread multi-step learning approach is semi-supervised learning with two steps: unsupervised, then supervised. Several authors (Castelli-Cover, 1996; Balcan-Blum, 2005; Niyogi, 2008; Ben-David et al, 2008; Urner et al, 2011) have analyzed SSL, with Balcan-Blum (2005) proposing a version of the PAC learning framework augmented by a "compatibility function" to link concept class and unlabeled data distribution. We propose to analyze SSL and other multi-step learning approaches, much in the spirit of Baxter's framework, by defining a learning problem generatively as a joint statistical model on X × Y. This determines in a natural way the class of conditional distributions that are possible with each marginal, and amounts to an abstract form of compatibility function. It also allows to analyze both discrete and non-discrete settings. As a tool for our analysis, we define a notion of γ-uniform shattering for statistical models. We use this to give conditions on the marginal and conditional models which imply an advantage for multi-step learning approaches. In particular, we recover a more general version of a result of Poggio et al (2012): under mild hypotheses a multi-step approach which learns features invariant under successive factors of a finite group of invariances has sample complexity requirements that are additive rather than multiplicative in the size of the subgroups.
1 Introduction
The classical PAC learning framework of Valiant (1984) considers a learning problem with unknown true distribution p on X × Y, Y = {0,1}, and fixed concept class C consisting of (deterministic) functions f : X → Y. The aim of learning is to select a hypothesis h : X → Y, say from C itself (realizable case), that best recovers f. More formally, the class C is said to be PAC learnable if there is a learning algorithm that with high probability selects h ∈ C having arbitrarily low generalization error for all possible distributions D on X. The distribution D governs both the sampling of points z = (x, y) ∈ X × Y by which the algorithm obtains a training sample and also the cumulation of error over all x ∈ X which gives the generalization error. A modification of this model, together with the notion of learnable with a model of probability (resp. decision rule) (Haussler, 1989; Kearns and Schapire, 1994), allows to treat non-deterministic functions f : X → Y and the case Y = [0,1] analogously. Polynomial dependence of the algorithms on sample size and reciprocals
of probability bounds is further required in both frameworks for efficient learning. Not only do these frameworks consider worst case error, in the sense of requiring the generalization error to be small for arbitrary distributions D on X, they assume the same concept class C regardless of the true underlying distribution D. In addition, choice of the hypothesis class is taken as part of the inductive bias of the algorithm and not addressed.
Various, by now classic, measures of complexity of a hypothesis space (e.g., VC dimension or Rademacher complexity, see Mohri et al. (2012) for an overview) allow to prove upper bounds on generalization error in the above setting, and distribution-specific variants of these, such as annealed VC-entropy (see Devroye et al. (1996)) or Rademacher averages (beginning with Koltchinskii (2001)), can be used to obtain more refined upper bounds.
The widespread strategy of semi-supervised learning (SSL) is known not to fit well into PAC-style frameworks (Valiant, 1984; Haussler, 1989; Kearns and Schapire, 1994). SSL algorithms perform a first step using unlabeled training data drawn from a distribution on X, followed by a second step using labeled training data from a joint distribution on X × Y. This has been studied by several authors (Balcan and Blum, 2005; Ben-David et al., 2008; Urner et al., 2011; Niyogi, 2013) following the seminal work of Castelli and Cover (1996) comparing the value of unlabeled and labeled data. One immediate observation is that without some tie between the possible marginals D on X and the concept class C which records possible conditionals p(y|x), there is no benefit to unlabeled data: if D can be arbitrary then it conveys no information about the true joint distribution that generated labeled data. Within PAC-style frameworks, however, C and D are completely independent. Balcan and Blum therefore proposed augmenting the PAC learning framework by the addition of a compatibility function χ : C × D → [0,1], which records the amount of compatibility we believe each concept from C to have with each D ∈ D, the class of "all" distributions on X. This function is required to be learnable from D and is then used to reduce the concept class from C to a sub-class which will be used for the subsequent (supervised) learning step. If χ is a good compatibility function this sub-class should have lesser complexity than C (Balcan and Blum, 2005). While PAC-style frameworks in essence allow the true joint distribution to be anything in C × D, the existence of a good compatibility function in the sense of Balcan and Blum (2005) implicitly assumes the joint model that we believe in is smaller. We return to this point in Section 2.1.
In this paper we study properties of multi-step learning strategies, i.e., those which involve multiple training steps, by considering the advantages of breaking a single learning problem into a sequence of two learning problems. We start by assuming a true distribution which comes from a class of joint distributions, i.e. statistical model, P on X × Y. We prove that underlying structure of a certain kind in P, together with differential availability of labeled vs. unlabeled data, imply a quantifiable advantage to multi-step learning at finite sample size. The structure we need is the existence of a representation t(x) of x ∈ X which is a sufficient statistic for the classification or regression of interest. Two common settings where this assumption holds are: manifold learning and group-invariant feature learning. In these settings we have respectively
1. t = t_{p_X} is determined by the marginal p_X and p_X is concentrated on a submanifold of X,
2. t = t_G is determined by a group action on X and p(y|x) is invariant¹ under this action.
Learning t in these cases corresponds respectively to learning manifold features or group-invariant features; various approaches exist (see (Niyogi, 2013; Poggio et al., 2012) for more discussion) and we do not assume any fixed method. Our framework is also not restricted to these two settings. As a tool for analysis we define a variant of VC dimension for statistical models which we use to prove a useful lower bound on generalization error even² under the assumption that the true distribution comes from P. This allows us to establish a gap at finite sample size between the error achievable by a single-step purely supervised learner and that achievable by a semi-supervised learner. We do not claim an asymptotic gap. The purpose of our analysis is rather to show that differential finite availability of data can dictate a multi-step learning approach. Our applications are respectively a strengthening of a manifold learning example analyzed by Niyogi (2013) and a group-invariant features example related to a result of Poggio et al. (2012). We also discuss the relevance of these to biological learning.
Our framework has commonalities with a framework of Baxter (2000) for transfer learning. In that work, Baxter considered learning the inductive bias (i.e., the hypothesis space) for an algorithm for a "target" learning task, based on experience from previous "source" learning tasks. For this purpose he defined a learning environment E to be a class of probability distributions on X × Y together with an unknown probability distribution Q on E, and assumed E to restrict the possible joint distributions which may arise. We also make a generative assumption, assuming joint distributions come from P, but we do not use a prior Q. Within his framework Baxter studied the reduction in generalization error for an algorithm to learn a new task, defined by p ∈ E, when given access to a sample from p and a sample from each of m other learning tasks, p₁, ..., p_m ∈ E, chosen randomly according to Q, compared with an algorithm having access to only a sample from p. The analysis produced upper bounds on generalization error in terms of covering numbers and a lower bound was also obtained in terms of VC dimension in the specific case of shallow neural networks. In proving our lower bound in terms of a variant of VC dimension we use a minimax analysis.
¹ This means there is a group G of transformations of X such that p(y|x) = p(y|g·x) for all g ∈ G.
² (Distribution-specific lower bounds are by definition weaker than distribution-free ones.)
2 Setup
We assume a learning problem is specified by a joint probability distribution p on Z = X × Y and a particular (regression, classification or decision) function f_p : X → R determined entirely by p(y|x). Moreover, we postulate a statistical model P on X × Y and assume p ∈ P. Despite the simplified notation, f_p(x) depends on the conditionals p(y|x) and not the entire joint distribution p.
There are three main types of learning problem our framework addresses (reflected in three types of f_p). When y is noise-free, i.e. p(y|x) is concentrated at a single y-value v_p(x) ∈ {0,1}, f_p = v_p : X → {0,1} (classification); here f_p(x) = E_p(y|x). When y is noisy, then either f_p : X → {0,1} (classification/decision) or f_p : X → [0,1] (regression) and f_p(x) = E_p(y|x). In all three cases the parameters which define f_p, the learning goal, depend only on the conditional expectation E_p(y|x).
We assume the learner knows the model P and the type of learning problem, i.e., the hypothesis class is the "concept class" C := {f_p : p ∈ P}. To be more precise, for the first type of f_p listed above, this is the concept class (Kearns and Vazirani, 1994); for the second type, it is a class of decision rules; and for the third type, it is a class of p-concepts (Kearns and Schapire, 1994). For specific choices of loss functions, we seek worst-case bounds on learning rates over all distributions p ∈ P.
Our results for all three types of learning problem are stated in Theorem 3. To keep the presentation simple, we give a detailed proof for the first two types, i.e., assuming labels are binary. This shows how classic PAC-style arguments for discrete X can be adapted to our framework, where X may be smooth. Extending these arguments to handle non-binary Y proceeds by the same modifications as for discrete X (cf. Kearns and Schapire (1994)). We remark that in the presence of noise, better bounds can be obtained (see Theorem 3 for details) if a more technical version of Definition 1 is used, but we leave this for a subsequent paper.
We define the following probabilistic version of fat-shattering dimension:
We define the following probabilistic version of fat shattering dimension:
Definition 1. Given P, a class of probability distributions on X ? {0, 1}, let 2 (0, 1), ? 2 (0, 1/2)
and n 2 N = {0, 1, . . . , ...}. Suppose there exist (disjoint) sets Si ? X, i 2 {1, . . . , n} with
S = [i Si , a reference probability measure q on X, and a sub-class Pn ? P of cardinality
|Pn | = 2n with the following properties:
1. q(Si )
/n for every i 2 {1, . . . , n}
2. q lower bounds the marginals of all p 2 Pn on S, i.e.
subset B ? S
n
R
B
dpX
R
B
dq for any p-measurable
3. 8 e 2 {0, 1} , 9 p 2 Pn such that Ep (y|x) > 1/2 + ? for x 2 Si when ei = 1 and
Ep (y|x) < 1/2 ? for x 2 Si when ei = 0
then we say P ?-shatters S1 , . . . , Sn -uniformly using Pn . The -uniform ?-shattering dimension of P is the largest n such that P ?-shatters some collection of n subsets of X -uniformly.
This provides a measure of complexity of the class P of distributions in the sense that it indicates
the variability of the expected y-values for x constrained to lie in the region S with measure at
least under corresponding marginals. The reference measure q serves as a lower bound on the
marginals and ensures that they ?uniformly" assign probabilty at least to S. Richness (variability)
of conditionals is thus traded off against uniformity of the corresponding marginal distributions.
3
Remark 2 (Uniformity of measure). The technical requirement of a reference distribution q is
automatically satisfied if all marginals pX for p 2 Pn are uniform over S. For simplicity this is the
situation considered in all our examples. The weaker condition (in terms of q) that we postulate in
Definition 1 is however sufficient for our main result, Theorem 3.
If fp is binary and y is noise-free then P shatters S1 , . . . , Sn -uniformly if and only if there is
a sub-class Pn ? P with the specified uniformity of measure, such that each fp (?) = Ep (y|?),
p 2 Pn is constant on each Si and the induced set-functions shatter {S1 , . . . , Sn } in the usual
(Vapnik-Chervonenkis) sense. In that case, ? may be chosen arbitrarily in (0, 1/2) and we omit
mention of it. If fp takes values in [0, 1] or fp is binary and y noisy then -uniform shattering can be
expressed in terms of fat-shattering (both at scale ?).
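To see Definition 1 in action, the following toy check (entirely ours) verifies the noise-free binary case on a six-point X, where γ-uniform shattering reduces to ordinary shattering of the S_i plus the measure condition:

```python
import itertools

# Toy instance: X = {0, ..., 5}, n = 3 sets S_i = {2i, 2i + 1},
# q uniform on X, so q(S_i) = 1/3 = gamma/n with gamma = 1.
S = [(0, 1), (2, 3), (4, 5)]
n = len(S)

# P_n: for each sign pattern e there is a p with f_p constant equal to e_i on S_i.
def f_p(e):
    return {x: e[i] for i, Si in enumerate(S) for x in Si}

patterns = list(itertools.product([0, 1], repeat=n))
assert len(patterns) == 2 ** n            # |P_n| = 2^n
# Condition 3 of Definition 1 (eta is arbitrary: labels are deterministic here):
for e in patterns:
    f = f_p(e)
    assert all(f[x] == e[i] for i, Si in enumerate(S) for x in Si)
print("S_1, S_2, S_3 are 1-uniformly shattered")
```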
We show that the γ-uniform η-shattering dimension of P can be used to lower bound the sample size required by even the most powerful learner of this class of problems. The proof is in the same spirit as purely combinatorial proofs of lower bounds using VC dimension. Essentially, the added condition on P in terms of γ and q allows to convert the risk calculation to a combinatorial problem. As a counterpoint to the lower bound result, we consider an alternative two-step learning strategy which makes use of underlying structure in X implied by the model P, and we obtain upper bounds for the corresponding risk.
2.1 Underlying structure
We assume a representation t : X → R^k of the data, such that p(y|x) can be expressed in terms of p(y|t(x)), say f_p(x) = g_θ(t(x)) for some parameter θ ∈ Θ. Such a t is generally known in Statistics as a sufficient dimension reduction for f_p, but here we make no assumption on the dimension k (compared with the dimension of X). This is in keeping with the paradigm of feature extraction for use in kernel machines, where the dimension of t(X) may even be higher than the original dimension of X. As in that setting, what will be important is rather that the intermediate representation t(x) reduce the complexity of the concept space. While t depends on p, we will assume it does so only via X. For example t could depend on p through the marginal p_X on X or a possible group action on X; it is a manifestation in the data X, possibly over time, of underlying structure in the true joint distribution p ∈ P. The representation t captures structure in X induced by p. On the other hand, the regression function itself depends only on the conditional p(y|t(x)).
In general, the natural factorization Π : P → P_X, p ↦ p_X determines for each marginal q ∈ P_X a collection Π⁻¹(q) of possible conditionals, namely those p(y|x) arising from joint p ∈ P that have marginal p_X = q. More generally, any sufficient statistic t induces a similar factorization (cf. the Fisher–Neyman characterization) Π_t : P → P_t, p ↦ p_t, where P_t is the marginal model with respect to t, and only conditionals p(y|t) are needed for learning. As before, given a known marginal q ∈ P_t, this implies a collection Π_t⁻¹(q) of possible conditionals p(y|t) relevant to learning.
Knowing q thus reduces the original problem, where p(y|x) or p(y|t) can come from any p ∈ P, to one where it comes from p in a reduced class Π⁻¹(q) or Π_t⁻¹(q) ⊊ P. Note the similarity with the assumption of Balcan and Blum (2005) that a good compatibility function reduce the concept class. In our case the concept class C consists of f_p defined by p(y|t) in ∪_t P_{Y|t} with P_{Y|t} := {p(y|t) : p ∈ P}, and marginals come from P_t. The joint model P that we postulate, meanwhile, corresponds to a subset of C × P_t (pairs (f_p, q) where f_p uses p ∈ Π_t⁻¹(q)). The indicator function for this subset is an abstract (binary) version of a compatibility function (recall the compatibility function of Balcan–Blum should be a [0,1]-valued function on C × D, satisfying further practical conditions that our function typically would not). Thus, in a sense, our assumption of a joint model P and sufficient statistic t amounts to a general form of compatibility function that links C and D without making assumptions on how t might be learned. This is enough to imply the original learning problem can be factored into first learning the structure t and then learning the parameter θ for f_p(x) = g_θ(t(x)) in a reduced hypothesis space. Our goal is to understand when and why one should do so.
2.2 Learning rates
We wish to quantify the benefits achieved by using such a factorization in terms of bounds on the expected loss (i.e. risk) for a sample of size m ∈ N drawn i.i.d. from any p ∈ P. We assume the learner is provided with a sample z̄ = (z₁, z₂, ..., z_m), with z_i = (x_i, y_i) ∈ X × Y = Z, drawn i.i.d. from the distribution p, and uses an algorithm A : Z^m → C = H to select A(z̄) to approximate f_p.
Let ℓ(A(z̄), f_p) denote a specific loss. It might be 0/1, absolute, squared, hinge or logistic loss. We define L(A(z̄), f_p) to be the global expectation or L²-norm of one of those pointwise losses ℓ:
$$L(A(\bar z), f_p) := E_x\, \ell(A(\bar z)(x), f_p(x)) = \int_X \ell(A(\bar z)(x), f_p(x))\, dp_X(x) \qquad (1)$$
or
$$L(A(\bar z), f_p) := \left\| \ell(A(\bar z), f_p) \right\|_{L^2(p_X)} = \sqrt{\int_X \ell(A(\bar z)(x), f_p(x))^2\, dp_X}. \qquad (2)$$
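For instance, with squared pointwise loss, (1) can be estimated by Monte Carlo given samples from p_X (a sketch with our own toy choices of hypothesis and regression function):

```python
import numpy as np

def expected_loss(h, f_p, sample_x):
    """Monte-Carlo estimate of L(h, f_p) = E_x (h(x) - f_p(x))**2 as in (1),
    with sample_x drawn i.i.d. from the marginal p_X."""
    return np.mean((h(sample_x) - f_p(sample_x)) ** 2)

rng = np.random.default_rng(0)
x = rng.uniform(size=100_000)
# Toy: p_X uniform on [0, 1], true regression f_p(x) = x, constant hypothesis
# h(x) = 0.5; then L = E (x - 0.5)**2 = 1/12, approximately 0.0833.
print(expected_loss(lambda s: 0.5 * np.ones_like(s), lambda s: s, x))
```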
Then the worst case expected loss (i.e. minimax risk) for the best learning algorithm with no knowledge of t_{p_X} is
$$R(m) := \inf_A \sup_{p\in\mathcal{P}} E_{\bar z}\, L(A(\bar z), f_p) = \inf_A \sup_{q\in\mathcal{P}_X}\; \sup_{\substack{p(y|t_q)\ \mathrm{s.t.}\\ p\in\mathcal{P},\ p_X=q}} E_{\bar z}\, L(A(\bar z), f_p), \qquad (3)$$
while for the best learning algorithm with oracle knowledge of t_{p_X} it is
$$Q(m) := \sup_{q\in\mathcal{P}_X}\; \inf_A\; \sup_{\substack{p(y|t_q)\ \mathrm{s.t.}\\ p\in\mathcal{P},\ p_X=q}} E_{\bar z}\, L(A(\bar z), f_p). \qquad (4)$$
Some clarification is in order regarding the classes over which the suprema are taken. In principle the worst case expected loss for a given A is the supremum over P of the expected loss. Since f_p(x) is determined by p(y|t_{p_X}(x)), and t_{p_X} is determined by p_X, this is a supremum over q ∈ P_X of a supremum over p(y|t_q(·)) such that p_X = q. Finding the worst case expected error for the best A therefore means taking the infimum of the supremum just described. In the case of Q(m), since the algorithm knows t_q, the order of the supremum over t changes with respect to the infimum: the learner can select the best algorithm A using knowledge of t_q.
Clearly R(m) ≥ Q(m) by definition. In the next section, we lower bound R(m) and upper bound Q(m) to establish a gap between R(m) and Q(m).
3 Main Result
We show that γ-uniform shattering dimension n or more implies a lower bound on the worst case expected error, R(m), when the sample size m ≤ n. In particular, in the setup specified in the previous section, if {g_θ(·) : θ ∈ Θ} has much smaller VC dimension than n this results in a distinct gap between rates for a learner with oracle access to t_{p_X} and a learner without.
Theorem 3. Consider the framework defined in the previous Section with Y = {0,1}. Assume {g_θ(·) : θ ∈ Θ} has VC dimension d < m and P has γ-uniform η-shattering dimension n ≥ (1+β)m. Then, for sample size m,
$$Q(m) \le 16 \sqrt{\frac{d \log(m+1) + \log 8 + 1}{2m}} \quad \text{while} \quad R(m) > \beta\, b\, c\, \gamma^{m+1}/8,$$
where b depends both on the type of loss and the presence of noise, while c depends on noise.
Assume the standard definition in (1). If the f_p are binary (in the noise-free or noisy setting), b = 1 for absolute, squared, 0-1, hinge or logistic loss. In the noisy setting, if f_p = E(y|x) ∈ [0,1], b = η for absolute loss and b = η² for squared loss. In general, c = 1 in the noise-free setting and c = (1/2 + η)^m in the noisy setting. By requiring P to satisfy a stronger notion of γ-uniform η-shattering one can obtain c = 1 even in the noisy case.
Note that for sample size m and γ-uniform shattering dimension 2m, we have β = 1; in the binary noise-free case b = c = 1, so the lower bound in its simplest form becomes γ^{m+1}/8. This is the bound we will use in the next Section to derive implications of Theorem 3.
Remark 4. We have stated in the Theorem a simple upper bound, sticking to Y = {0,1} and using VC dimension, in order to focus the presentation on the lower bound which uses the new complexity measure. The upper bound could be improved. It could also be replaced with a corresponding upper bound assuming instead Y = [0,1] and fat-shattering dimension d.
Proof. The upper bound on Q(m) holds for an ERM algorithm (by the classic argument, see for example Corollary 12.1 in Devroye et al. (1996)). We focus here on the lower bound for R(m). Moreover, we stick to the simpler definition of γ-uniform shattering in Definition 1 and omit the proof of the final statement of the Theorem, which is slightly more involved. We let n = 2m (i.e. β = 1) and we comment in a footnote on the result for general β. Let S₁, ..., S_{2m} be sets which are γ-uniformly η-shattered using the family P_{2m} ⊂ P and denote their union by S. By assumption S has measure at least γ under a reference measure q which is dominated by all marginals p_X for p ∈ P_{2m} (see Definition 1). We divide our argument into three parts.
1. If we prove a lower bound for the average over P_{2m},
$$\forall A, \quad \frac{1}{2^{2m}} \sum_{p\in\mathcal{P}_{2m}} E_{\bar z}\, L(A(\bar z), f_p) \ge b\, c\, \gamma^{m+1}/8, \qquad (5)$$
it will also be a lower bound for the supremum over P_{2m}:
$$\forall A, \quad \sup_{p\in\mathcal{P}_{2m}} E_{\bar z}\, L(A(\bar z), f_p) \ge b\, c\, \gamma^{m+1}/8,$$
and hence for the supremum over P. It therefore suffices to prove (5).
2. Given x ∈ S, define v_p(x) to be the more likely label for x under the joint distribution p ∈ P_{2m}. This notation extends to the noisy case the definition of v_p already given for the noise-free case. The uniform shattering condition implies p(v_p(x)|x) > 1/2 + η in the noisy case and p(v_p(x)|x) = 1 in the noise-free case. Given x̄ = (x₁, ..., x_m) ∈ S^m, write z̄_p(x̄) := (z₁, ..., z_m) where z_j = (x_j, v_p(x_j)). Then
$$E_{\bar z}\, L(A(\bar z), f_p) = \int_{Z^m} L(A(\bar z), f_p)\, dp^m(\bar z) \ge \int_{S^m\times Y^m} L(A(\bar z), f_p)\, dp^m(\bar z) \ge c \int_{S^m} L(A(\bar z_p(\bar x)), f_p)\, dp_X^m(\bar x),$$
where c is as specified in the Theorem. Note the sets
$$V_l := \{\bar x \in S^m \subset X^m : \text{the } x_j \text{ occupy exactly } l \text{ of the } S_i\}$$
for l = 1, ..., m define a partition of S^m. Recall that dp_X ≥ dq on S for all p ∈ P_{2m}, so
$$\frac{1}{2^{2m}} \sum_{p\in\mathcal{P}_{2m}} \int_{S^m} L(A(\bar z_p(\bar x)), f_p)\, dp_X^m(\bar x) \ge \sum_{l=1}^{m} \int_{\bar x\in V_l} \underbrace{\left[\frac{1}{2^{2m}} \sum_{p\in\mathcal{P}_{2m}} L(A(\bar z_p(\bar x)), f_p)\right]}_{I}\, dq^m(\bar x).$$
We claim the integrand, I, is bounded below by bγ/8 (this computation is performed in part 3, and depends on knowing x̄ ∈ V_l). At the same time, S has measure at least γ under q, so
$$\sum_{l=1}^{m} \int_{\bar x\in V_l} dq^m(\bar x) = \int_{\bar x\in S^m} dq^m(\bar x) \ge \gamma^m,$$
which will complete the proof of (5).
3. We now assume a fixed but arbitrary x̄ ∈ V_l and prove I ≥ bγ/8. To simplify the discussion, we will refer to sets S_i which contain a component x_j of x̄ as S_i with data. We also need notation for the elements of P_{2m}: for each L ⊂ [2m] denote by p(L) the unique element of P_{2m} such that v_{p(L)}|_{S_i} = 1 if i ∈ L, and v_{p(L)}|_{S_i} = 0 if i ∉ L. Now, let L_{x̄} := {i ∈ [2m] : x̄ ∩ S_i ≠ ∅}. These are the indices of sets S_i with data. By assumption |L_{x̄}| = l, and so |L_{x̄}^c| = 2m − l.
Every subset L ⊂ [2m], and hence every p ∈ P_{2m}, is determined by L ∩ L_{x̄} and L ∩ L_{x̄}^c. We will collect together all p(L) having the same L ∩ L_{x̄}, namely for each D ⊂ L_{x̄} define
P_D := {p(L) ∈ P_{2m} : L ∩ L_{x̄} = D}.
These 2^l families partition P_{2m} and in each P_D there are 2^{2m−l} probability distributions. Most importantly, z̄_p(x̄) is the same for all p ∈ P_D (because D determines v_p on the S_i with data). This implies A(z̄_p(x̄)) : X → R is the same function³ of X for all p in a given P_D. To simplify notation, since we will be working within a single P_D, we write f := A(z̄_p(x̄)).
While f is the hypothesized regression function given data x̄, f_p is the true regression function when p is the underlying distribution. For each set S_i let v_i be 1 if f is above 1/2 on a majority of S_i using reference measure q (a q-majority) and 0 otherwise.
We now focus on the "unseen" S_i where no data lie (i.e., i ∈ L_{x̄}^c) and use the v_i to specify a 1-1 correspondence between elements p ∈ P_D and subsets K ⊂ L_{x̄}^c:
p ∈ P_D ↔ K_p := {i ∈ L_{x̄}^c : v_p ≠ v_i}.
Take a specific p ∈ P_D with its associated K_p. We have |f(x) − f_p(x)| > η on the q-majority of the set S_i for all i ∈ K_p.
The condition |f(x) − f_p(x)| > η with f(x) and f_p(x) on opposite sides of 1/2 implies a lower bound on ℓ(f(x), f_p(x)) for each of the pointwise loss functions ℓ that we consider (0/1, absolute, square, hinge, logistic). The value of b, however, differs from case to case (see Appendix). For now we have
$$\int_{S_i} \ell(f(x), f_p(x))\, dp_X(x) \ge \int_{S_i} \ell(f(x), f_p(x))\, dq(x) \ge b\,\frac{1}{2}\int_{S_i} dq(x) \ge \frac{b\gamma}{4m}.$$
Summing over all i ∈ K_p, and letting k = |K_p|, we obtain (still for the same p)
$$L(f, f_p) \ge k\,\frac{b\gamma}{4m}$$
(assuming L is defined by equation (1))⁴. There are $\binom{2m-l}{k}$ possible K with cardinality k, for any k = 0, ..., 2m − l. Therefore,
$$\sum_{p\in\mathcal{P}_D} L(f, f_p) \ge \sum_{k=0}^{2m-l} \binom{2m-l}{k}\, k\, \frac{b\gamma}{4m} = 2^{2m-l}\,\frac{2m-l}{2}\,\frac{b\gamma}{4m} \ge 2^{2m-l}\,\frac{b\gamma}{8}$$
(using 2m − l ≥ 2m − m = m)⁵. Since D was an arbitrary subset of L_{x̄}, this same lower bound holds for each of the 2^l families P_D and so
$$I = \frac{1}{2^{2m}} \sum_{p\in\mathcal{P}_{2m}} L(f, f_p) \ge \frac{b\gamma}{8}.$$
In the constructions of the next Section it is often the case that one can prove a different level of shattering for different $n$, namely $\lambda(n)$-uniform shattering of $n$ subsets for various $n$. The following Corollary is an immediate consequence of the Theorem for such settings. We state it for binary $f_p$ without noise.

Corollary 5. Let $C \in (0,1)$ and $M \in \mathbb{N}$. If $\mathcal{P}$ $\lambda(n)$-uniformly $\gamma$-shatters $n$ subsets of $X$ and $\lambda(n)^{n+1}/8 > C$ for all $n < M$, then no learning algorithm can achieve worst case expected error below $\gamma C$ using a training sample of size less than $M/2$. If such uniform shattering holds for all $n \in \mathbb{N}$ then the same lower bound applies regardless of sample size.

Even when $\lambda(n)$-uniform shattering holds for all $n \in \mathbb{N}$ and $\lim_{n\to\infty} \lambda(n) = 1$, if $\lambda(n)$ approaches 1 sufficiently slowly then it is possible that $\lambda(n)^{n+1} \to 0$, and there is no asymptotic obstacle to learning. By contrast, the next Section shows an extreme situation where $\lim_{n\to\infty} \lambda(n)^{n+1} \ge e^{-1} > 0$. In that case, learning is impossible.
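The two regimes can be seen numerically. In the sketch below, $\lambda(n) = 1 - 1/(n+1)$ is the rate from the manifold example of the next Section, while $\lambda(n) = 1 - \log n / n$ is an illustrative rate (chosen here for the example, not taken from the paper) that approaches 1 yet has $\lambda(n)^{n+1} \to 0$.

```python
import numpy as np

n = np.array([10, 100, 1_000, 10_000], dtype=float)
hard = (1 - 1 / (n + 1)) ** (n + 1)    # -> 1/e: the obstacle persists at all n
easy = (1 - np.log(n) / n) ** (n + 1)  # -> 0 (roughly 1/n): obstacle vanishes
print(hard)       # approaches 1/e ~ 0.3679
print(easy)       # decays toward 0
print(1 / np.e)
```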
4 Applications and conclusion
Manifold learning We now describe a simpler, finite-dimensional version of the example in Niyogi (2013). Let $X = \mathbb{R}^D$, $D \ge 2$, and $Y = \{0, 1\}$. Fix $N \in \mathbb{N}$ and consider a very simple type of 1-dimensional manifold in $X$, namely the union of $N$ linear segments, connected in circular fashion (see Figure 1). Let $\mathcal{P}_X$ be the collection of marginal distributions, each of which is supported on and assigns uniform probability along a curve of this type. There is a 1-1 correspondence between the elements of $\mathcal{P}_X$ and the curves just described.
³ Warning: $f$ need not be an element of $\{f_p : p \in P_{2n}\}$; we only know $f \in \mathcal{H} = \{f_p : p \in \mathcal{P}\}$.
⁴ In the $L_2$ version, the reader can verify the same lower bound holds.
⁵ In the case where we use $(1+\epsilon)m$ instead of $2m$, we would have $(1+\epsilon)m - l \ge \epsilon m$ here.
Figure 1: An example of $M$ with $N = 12$. The dashed curve is labeled 1, the solid curve 0 (in the next Figure as well).
Figure 2: $M$ with $N = 28 = 4(n+1)$ pieces, used to prove uniform shattering of $n$ sets (shown for the case $n = 6$ with $\bar e = 010010$).
On each curve $M$, choose two distinct points $x'$, $x''$. Removing these disconnects $M$. Let one component be labeled 0 and the other 1, then label $x'$ and $x''$ oppositely. Let $\mathcal{P}$ be the class of joint distributions on $X \times Y$ with conditionals as described and marginals in $\mathcal{P}_X$. This is a noise-free setting and $f_p$ is binary. Given $M$ (or circular coordinates on $M$), consider the reduced class $\mathcal{P}' := \{p \in \mathcal{P} : \mathrm{support}(p_X) = M\}$. Then $\mathcal{H}' := \{f_p : p \in \mathcal{P}'\}$ has VC dimension 3. On the other hand, for $n < N/4 - 1$ it can be shown that $\mathcal{P}$ $\lambda(n)$-uniformly shatters $n$ sets with $f_p$, where $\lambda(n) = 1 - \frac{1}{n+1}$ (see Appendix and Figure 2). Since $(1 - \frac{1}{n+1})^{n+1} \to e^{-1} > 0$ as $n \to \infty$, it follows from Corollary 5 that the worst case expected error is bounded below by $e^{-1}/8$ for any sample of size $n \le N/8 - 1/2$. If many linear pieces are allowed (i.e., $N$ is high) this could be an impractical number of labeled examples. By contrast with this example, $\lambda(n)$ in Niyogi's example cannot be made arbitrarily close to 1.
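To make the scale concrete, the following lines (a worked illustration with assumed values of $N$) print the sample-size threshold $N/8 - 1/2$ below which the worst case expected error stays above $e^{-1}/8 \approx 0.046$.

```python
import math

for N in (10**3, 10**6):
    threshold = N / 8 - 0.5   # below this sample size, no learner does better
    floor = 1 / (8 * math.e)  # error lower bound e^{-1}/8
    print(f"N={N}: need more than {threshold:.0f} labeled examples "
          f"to beat worst-case error {floor:.3f}")
```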
Group-invariant features We give a simplified, partially-discrete example (for a smooth version and Figures, see Appendix). Let $Y = \{0, 1\}$ and let $X = J \times I$ where $J = \{0, 1, \ldots, n_1 - 1\} \times \{0, 1, \ldots, n_2 - 1\}$ is an $n_1$ by $n_2$ grid ($n_i \in \mathbb{N}$) and $I = [0, 1]$ is a real line segment. One should picture $X$ as a rectangular array of vertical sticks. Above each grid point $(j_1, j_2)$ consider two special points on the stick $I$, one with $i = i_+ := 1 - \epsilon$ and the other with $i = i_- := 0 + \epsilon$. Let $\mathcal{P}_X$ contain only the uniform distribution on $X$ and assume the noise-free setting. For each $\bar e \in \{+,-\}^{n_1 n_2}$, on each segment $(j_1, j_2) \times I$ assign, via $p_{\bar e}$, the label 1 above the special point (determined by $\bar e$) and 0 below the point. This determines a family of $2^{n_1 n_2}$ conditional distributions and thus a family $\mathcal{P} := \{p_{\bar e} : \bar e \in \{+,-\}^{n_1 n_2}\}$ of $2^{n_1 n_2}$ joint distributions. The reader can verify that $\mathcal{P}$ has $2\epsilon$-uniform shattering dimension $n_1 n_2$. Note that when the true distribution is $p_{\bar e}$ for some $\bar e \in \{+,-\}^{n_1 n_2}$, the labels will be invariant under the action $a_{\bar e}$ of $\mathbb{Z}_{n_1} \times \mathbb{Z}_{n_2}$ defined as follows. Given $(z_1, z_2) \in \mathbb{Z}_{n_1} \times \mathbb{Z}_{n_2}$ and $(j_1, j_2) \in J$, let the group element $(z_1, z_2)$ move the vertical stick at $(j_1, j_2)$ to the one at $(z_1 + j_1 \bmod n_1,\; z_2 + j_2 \bmod n_2)$ without flipping the stick over, just stretching it as needed so the special point $i_\pm$ determined by $\bar e$ on the first stick goes to the one on the second stick. The orbit space of the action can be identified with $I$. Let $t : X \times Y \to I$ be the projection of $X \times Y$ to this orbit space; then there is an induced labelling of this orbit space (because labels were invariant under the action of the group). Given access to $t$, the resulting concept class has VC dimension 1. On the other hand, given instead access to a projection $s$ for the action of the subgroup $\mathbb{Z}_{n_1} \times \{0\}$, the class $\tilde{\mathcal{P}} := \{p(\cdot \mid s) : p \in \mathcal{P}\}$ has $2\epsilon$-uniform shattering dimension $n_2$. Thus we have a general setting where the overall complexity requirements for two-step learning are $n_1 + n_2$ while for single-step learning they are $n_1 n_2$.
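A toy instantiation of this construction is sketched below (the values $n_1 = 4$, $n_2 = 5$, $\epsilon = 0.1$ and the sampled sign pattern are assumptions made for the illustration, not from the paper); it shows the label rule and the complexity gap between the two learning routes.

```python
import numpy as np

n1, n2, eps = 4, 5, 0.1
rng = np.random.default_rng(0)
e_bar = rng.choice([1, -1], size=(n1, n2))  # one of the 2**(n1*n2) family members

def label(j1, j2, i):
    """Label under p_{e_bar} of point i on the stick above grid point (j1, j2)."""
    cut = 1 - eps if e_bar[j1, j2] == 1 else eps  # special point i_+ or i_-
    return int(i > cut)

print(label(0, 0, 0.95), label(0, 0, 0.50))
# Single-step learning faces shattering dimension n1*n2; the two-step route
# (learn group-invariant features first) pays roughly n1 + n2.
print("dims:", n1 * n2, "vs", n1 + n2, "| family size:", 2 ** (n1 * n2))
```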
Conclusion We used a notion of uniform shattering to demonstrate both manifold learning and
invariant feature learning situations where learning becomes impossible unless the learner has access
to very large amounts of labeled data or else uses a two-step semi-supervised approach in which
suitable manifold- or group-invariant features are learned first in unsupervised fashion. Our examples
also provide a complexity manifestation of the advantages, observed by Poggio and Mallat, of forming
intermediate group-invariant features according to sub-groups of a larger transformation group.
Acknowledgements The author is deeply grateful to Partha Niyogi for the chance to have been his
student. This paper is directly inspired by discussions with him which were cut short much too soon.
The author also thanks Ankan Saha and Misha Belkin for very helpful input on preliminary drafts.
References
M. Ahissar and S. Hochstein. The reverse hierarchy theory of visual perceptual learning. Trends in Cognitive Sciences, 8(10):457–464, 2004.
G. Alain and Y. Bengio. What regularized auto-encoders learn from the data generating distribution. Technical report, 2012. arXiv:1211.4246 [cs.LG].
M.-F. Balcan and A. Blum. A PAC-style model for learning from labeled and unlabeled data. In Learning Theory, volume 3559, pages 111–126. Springer LNCS, 2005.
J. Baxter. A model of inductive bias learning. Journal of Artificial Intelligence Research, 12:149–198, 2000.
M. Belkin, P. Niyogi, and V. Sindhwani. Manifold regularization: a geometric framework for learning from labeled and unlabeled examples. Journal of Machine Learning Research, 7:2399–2434, 2006.
S. Ben-David, T. Lu, and D. Pál. Does unlabeled data provably help? Worst-case analysis of the sample complexity of semi-supervised learning. In COLT, pages 33–44, 2008.
J. Bourne and M. Rosa. Hierarchical development of the primate visual cortex, as revealed by neurofilament immunoreactivity: early maturation of the middle temporal area (MT). Cerebral Cortex, 16(3):405–414, 2006.
V. Castelli and T. Cover. The relative value of labeled and unlabeled samples in pattern recognition. IEEE Transactions on Information Theory, 42:2102–2117, 1996.
L. Devroye, L. Györfi, and G. Lugosi. A Probabilistic Theory of Pattern Recognition, volume 31 of Applications of Mathematics. Springer, New York, 1996.
D. Haussler. Generalizing the PAC model: sample size bounds from metric dimension-based uniform convergence results. pages 40–45, 1989.
M. Kearns and R. Schapire. Efficient distribution-free learning of probabilistic concepts. Journal of Computer and System Sciences, 48:464–497, 1994.
M. J. Kearns and U. V. Vazirani. An Introduction to Computational Learning Theory. MIT Press, Cambridge, Massachusetts, 1994.
V. Koltchinskii. Rademacher penalties and structural risk minimization. IEEE Transactions on Information Theory, 47(5):1902–1914, 2001.
S. Mallat. Group invariant scattering. CoRR, abs/1101.2286, 2011. http://arxiv.org/abs/1101.2286.
M. Mohri, A. Rostamizadeh, and A. Talwalkar. Foundations of Machine Learning. MIT Press, 2012.
P. Niyogi. Manifold regularization and semi-supervised learning: Some theoretical analyses. Journal of Machine Learning Research, 14:1229–1250, 2013.
T. Poggio, J. Mutch, F. Anselmi, L. Rosasco, J. Leibo, and A. Tacchetti. The computational magic of the ventral stream: sketch of a theory (and why some deep architectures work). Technical report, Massachusetts Institute of Technology, 2012. MIT-CSAIL-TR-2012-035.
R. Urner, S. Shalev-Shwartz, and S. Ben-David. Access to unlabeled data can speed up prediction time. In ICML, 2011.
L. Valiant. A theory of the learnable. Communications of the ACM, 27(11):1134–1142, 1984.
5,744 | 6,198 | Phased Exploration with Greedy Exploitation in
Stochastic Combinatorial Partial Monitoring Games
Sougata Chaudhuri
Department of Statistics
University of Michigan Ann Arbor
[email protected]
Ambuj Tewari
Department of Statistics and Department of EECS
University of Michigan Ann Arbor
[email protected]
Abstract
Partial monitoring games are repeated games where the learner receives feedback that might be different from the adversary's move or even the reward gained by the learner. Recently, a general model of combinatorial partial monitoring (CPM) games was proposed [1], where the learner's action space can be exponentially large and the adversary samples its moves from a bounded, continuous space, according to a fixed distribution. The paper gave a confidence bound based algorithm (GCB) that achieves $O(T^{2/3}\log T)$ distribution independent and $O(\log T)$ distribution dependent regret bounds. The implementation of their algorithm depends on two separate offline oracles and the distribution dependent regret additionally requires existence of a unique optimal action for the learner. Adopting their CPM model, our first contribution is a Phased Exploration with Greedy Exploitation (PEGE) algorithmic framework for the problem. Different algorithms within the framework achieve $O(T^{2/3}\sqrt{\log T})$ distribution independent and $O(\log^2 T)$ distribution dependent regret respectively. Crucially, our framework needs only the simpler "argmax" oracle from GCB, and the distribution dependent regret does not require existence of a unique optimal action. Our second contribution is another algorithm, PEGE2, which combines gap estimation with a PEGE algorithm, to achieve an $O(\log T)$ regret bound, matching the GCB guarantee but removing the dependence on the size of the learner's action space. However, like GCB, PEGE2 requires access to both offline oracles and the existence of a unique optimal action. Finally, we discuss how our algorithm can be efficiently applied to a CPM problem of practical interest: namely, online ranking with feedback at the top.
1 Introduction

Partial monitoring (PM) games are repeated games played between a learner and an adversary over discrete time points. At every time point, the learner and adversary each simultaneously select an action, from their respective action sets, and the learner gains a reward, which is a function of the two actions. In PM games, the learner receives limited feedback, which might neither be the adversary's move (full information games) nor the reward gained (bandit games). In stochastic PM games, the adversary generates actions which are independent and identically distributed according to a distribution fixed before the start of the game and unknown to the learner. The learner's objective is to develop a learning strategy that incurs low regret over time, based on the feedback received during the course of the game. Regret is defined as the difference between the cumulative reward of the learner's strategy and the best fixed learner's action in hindsight. The usual learning strategies in online games combine some form of exploration (getting feedback on certain learner's actions) and exploitation (playing the perceived optimal action based on current estimates).
Starting with early work in the 2000s [2, 3], the study of finite PM games reached a culmination point with a comprehensive and complete classification [4]. We refer the reader to these works for more references and also note that newer results continue to appear [5]. Finite PM games restrict both the learner's and adversary's action spaces to be finite, with a very general feedback model. All finite partial monitoring games can be classified into one of four categories, with minimax regret $\Theta(T)$, $\Theta(T^{2/3})$, $\Theta(T^{1/2})$ and $\Theta(1)$. The classification is governed by global and local observability properties pertaining to a game [4]. Another line of work has extended the traditional multi-armed bandit problem (MAB) [6] to include combinatorial action spaces for the learner (CMAB) [7, 8]. The combinatorial action space can be exponentially large, rendering traditional MAB algorithms designed for small finite action spaces impractical, with regret bounds scaling with the size of the action space. The CMAB algorithms exploit a finite subset of base actions, which are specific to the structure of the problem at hand, leading to practical algorithms and regret bounds that do not scale with, or scale very mildly with, the size of the learner's action space.

While finite PM and CMAB problems have witnessed a lot of activity, there is only one paper [1] on combinatorial partial monitoring (CPM) games, to the best of our knowledge. In that paper, the authors combined the combinatorial aspect of CMAB with the limited feedback aspect of finite PM games, to develop a CPM model. The model extended PM games to include combinatorial action spaces for the learner, which might be exponentially large, and infinite action spaces for the adversary. Neither of these situations can be handled by generic algorithms for finite PM games. Specifically, the model considered an action space $\mathcal{X}$ for the learner that has a small subset of actions defining a global observable set (see Assumption 2 in Section 2). The adversary's action space is a continuous, bounded vector space, with the adversary sampling moves from a fixed distribution over the vector space. The reward function considered is a general non-linear function of the learner's and adversary's actions, with some restrictions (see Assumptions 1 & 3 in Section 2). The model incorporated a linear feedback mechanism where the feedback received is a linear transformation of the adversary's move. Inspired by the classic confidence bound algorithms for MABs, such as UCB [6], the authors proposed a Global Confidence Bound (GCB) algorithm that enjoyed two types of regret bound. The first one was a distribution independent $O(T^{2/3}\log T)$ regret bound and the second one was a distribution dependent $O(\log T)$ regret bound. A distribution dependent regret bound involves factors specific to the adversary's fixed distribution, while distribution independent means the regret bound holds over all possible distributions in a broad class of distributions. Both bounds also had a logarithmic dependence on $|\mathcal{X}|$. The algorithm combined online estimation with two offline computational oracles. The first oracle finds the action(s) achieving maximum value of the reward function over $\mathcal{X}$, for a particular adversary action (argmax oracle), and the second oracle finds the action(s) achieving second maximum value of the reward function over $\mathcal{X}$, for a particular adversary action (arg-secondmax oracle). Moreover, the distribution dependent regret bound requires existence of a unique optimal learner action. The inspiration for the CPM model came from various applications like crowdsourcing and matching problems like matching products with customers.
Our Contributions. We adopt the CPM model proposed earlier [1]. However, instead of using upper confidence bound techniques, our work is motivated by another classic technique developed for MABs, namely that of forced exploration. This technique was already used in the classic paper of Robbins [9] and has also been called "forcing with certainty equivalence" in the control theory literature [10]. We develop a Phased Exploration with Greedy Exploitation (PEGE) algorithmic framework (Section 3), borrowing the PEGE terminology from work on linearly parameterized bandits [11]. When the framework is instantiated with different parameters, it achieves $O(T^{2/3}\sqrt{\log T})$ distribution independent and $O(\log^2 T)$ distribution dependent regret. Significantly, the framework combines online estimation with only the argmax oracle from GCB, which is a practical advantage over requiring an additional arg-secondmax oracle. Moreover, the distribution dependent regret does not require existence of a unique optimal action. Uniqueness of the optimal action can be an unreasonable assumption, especially in the presence of a combinatorial action space. Our second contribution is another algorithm, PEGE2 (Section 4), that combines a PEGE algorithm with gap estimation, to achieve a distribution dependent $O(\log T)$ regret bound, thus matching the GCB regret guarantee in terms of $T$ and gap. Here, gap refers to the difference between the expected reward of the optimal and second optimal learner's actions. However, like GCB, PEGE2 does require access to both the oracles and the existence of a unique optimal action for $O(\log T)$ regret, and its regret is never larger than $O(T^{2/3}\sqrt{\log T})$ when there is no unique optimal action. A crucial advantage of PEGE and PEGE2 over GCB is that all our regret bounds are independent of $|\mathcal{X}|$, only depending on the size of the small global observable set. Thus, though we have adopted the CPM model [1], our regret bounds are meaningful for countably infinite or even continuous learner's action spaces, whereas the GCB regret bound has an explicit logarithmic dependence on $|\mathcal{X}|$. We provide a detailed comparison of our work with the GCB algorithm in Section 5. Finally, we discuss how our algorithms can be efficiently applied in the CPM problem of online ranking with feedback restricted to top ranked items (Section 6), a problem already considered [12] but analyzed in a non-stochastic setting.
2 Preliminaries and Assumptions

The online game is played between a learner and an adversary, over discrete rounds indexed by $t = 1, 2, \ldots$. The learner's action set is denoted as $\mathcal{X}$, which can be exponentially large. The adversary's action set is the infinite set $[0,1]^n$. The adversary fixes a distribution $p$ on $[0,1]^n$ before the start of the game (the adversary's strategy), with $p$ unknown to the learner. At each round of the game, the adversary samples $\theta(t) \in [0,1]^n$ according to $p$, with $E_{\theta(t)\sim p}[\theta(t)] = \theta_p^*$. The learner chooses $x(t) \in \mathcal{X}$ and gets reward $r(x(t), \theta(t))$. However, the learner might not get to know either $\theta(t)$ (as in a full information game) or $r(x(t), \theta(t))$ (as in a bandit game). In fact, the learner receives, as feedback, a linear transformation of $\theta(t)$. That is, every action $x \in \mathcal{X}$ has an associated transformation matrix $M_x \in \mathbb{R}^{m_x \times n}$. On playing action $x(t)$, the learner receives a feedback $M_{x(t)} \cdot \theta(t) \in \mathbb{R}^{m_{x(t)}}$. Note that the game with the defined feedback mechanism subsumes full information and bandit games: $M_x = I_{n\times n}$, $\forall x$, makes it a full information game since $M_x \cdot \theta = \theta$; if $r(x, \theta) = x \cdot \theta$, then $M_x = x \in \mathbb{R}^n$ makes it a bandit game. The dimension $n$, action space $\mathcal{X}$, reward function $r(\cdot,\cdot)$ and transformation matrices $M_x$, $\forall x \in \mathcal{X}$, are known to the learner. The goal of the learner is to minimize the expected regret, which, for a given time horizon $T$, is:
$$R(T) = T \cdot \max_{x \in \mathcal{X}} E[r(x, \theta)] \;-\; \sum_{t=1}^{T} E[r(x(t), \theta(t))] \qquad (1)$$
where the expectation in the first term is taken over $\theta$, w.r.t. distribution $p$, and the second expectation is taken over $\theta$ and possible randomness in the learner's algorithm.
Assumption 1. (Restriction on Reward Function) The first assumption is that $E_{\theta\sim p}[r(x,\theta)] = \bar r(x, \theta_p^*)$, for some function $\bar r(\cdot,\cdot)$. That is, the expected reward is a function of $x$ and $\theta_p^*$, which is always satisfied if $r(x,\theta)$ is a linear function of $\theta$, or if distribution $p$ happens to be any distribution with support $[0,1]^n$ and fully parameterized by its mean $\theta_p^*$. With this assumption, the expected regret becomes:
$$R(T) = T \cdot \bar r(x^*, \theta_p^*) \;-\; \sum_{t=1}^{T} E[\bar r(x(t), \theta_p^*)]. \qquad (2)$$
For distribution dependent regret bounds, we define gaps in expected rewards: let $x^* \in S(\theta_p^*) = \operatorname{argmax}_{x\in\mathcal{X}} \bar r(x, \theta_p^*)$. Then $\Delta_x = \bar r(x^*, \theta_p^*) - \bar r(x, \theta_p^*)$, $\Delta_{\max} = \max\{\Delta_x : x \in \mathcal{X}\}$ and $\Delta = \min\{\Delta_x : x \in \mathcal{X},\, \Delta_x > 0\}$.
Assumption 2. (Existence of Global Observable Set) The second assumption is on the existence of a global observable set, which is a subset of the learner's action set and is required for estimating the adversary's move $\theta$. The global observable set is defined as follows: for a set of actions $\sigma = \{x_1, x_2, \ldots, x_{|\sigma|}\} \subseteq \mathcal{X}$, let their transformation matrices be stacked in a top-down fashion to obtain a $\big(\sum_{i=1}^{|\sigma|} m_{x_i}\big) \times n$ dimensional matrix $M_\sigma$. $\sigma$ is said to be a global observable set if $M_\sigma$ has full column rank, i.e., $\operatorname{rank}(M_\sigma) = n$. Then, the Moore-Penrose pseudoinverse $M_\sigma^+$ satisfies $M_\sigma^+ M_\sigma = I_{n\times n}$. Without the assumption on the existence of a global observable set, it might be the case that even if the learner plays all actions in $\mathcal{X}$ on the same $\theta$, the learner might not be able to recover $\theta$ (as $M_\sigma^+ M_\sigma = I_{n\times n}$ will not hold without the full rank assumption). In that case, the learner might not be able to distinguish between $\theta_{p_1}^*$ and $\theta_{p_2}^*$, corresponding to two different adversary strategies. Then, with non-zero probability, the learner can suffer $\Omega(T)$ regret and no learner strategy can guarantee a sub-linear in $T$ regret (the intuition forms the base of the global observability condition in [2]). Note that the size of the global observable set is small, i.e., $|\sigma| \le n$. A global observable set can be found by including an action $x$ in $\sigma$ if it strictly increases the rank of $M_\sigma$, till the rank reaches $n$. There can, of course, be more than one global observable set.
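The greedy construction just described is straightforward to implement. Below is a minimal sketch; the action set and transformation matrices are illustrative assumptions, not from the paper.

```python
import numpy as np

def greedy_global_observable_set(transform_mats, n):
    """transform_mats: dict mapping action -> (m_x, n) numpy array M_x."""
    sigma, stacked = [], np.zeros((0, n))
    for x, M_x in transform_mats.items():
        candidate = np.vstack([stacked, M_x])
        # keep x only if it strictly increases the rank of the stacked matrix
        if np.linalg.matrix_rank(candidate) > np.linalg.matrix_rank(stacked):
            sigma.append(x)
            stacked = candidate
        if np.linalg.matrix_rank(stacked) == n:
            break
    return sigma, stacked

# Toy example: bandit-style feedback M_x = e_x for 3 of 5 coordinates, plus
# one action whose feedback observes the remaining two coordinates at once.
n = 5
mats = {0: np.eye(n)[[0]], 1: np.eye(n)[[1]], 2: np.eye(n)[[2]],
        3: np.array([[0., 0., 0., 1., 1.], [0., 0., 0., 1., -1.]])}
sigma, M_sigma = greedy_global_observable_set(mats, n)
print(sigma, np.linalg.matrix_rank(M_sigma))  # rank n => global observable set
```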
Assumption 3. (Lipschitz Continuity of Expected Reward Function) The third assumption is on the Lipschitz continuity of the expected reward function in its second argument. More precisely, it is assumed that $\exists\, R > 0$ such that $\forall x \in \mathcal{X}$, for any $\theta_1$ and $\theta_2$, $|\bar r(x, \theta_1) - \bar r(x, \theta_2)| \le R\,\|\theta_1 - \theta_2\|_2$. This assumption is reasonable since otherwise a small error in the estimation of the mean reward vector $\theta_p^*$ can introduce a large change in expected reward, leading to difficulty in controlling regret over time. The Lipschitz condition holds trivially for expected reward functions which are linear in the second argument. The continuity assumption, along with the fact that the adversary's moves are in $[0,1]^n$, implies boundedness of the expected reward for any learner's action and any adversary's action. We denote $R_{\max} = \max_{x\in\mathcal{X},\,\theta\in[0,1]^n} \bar r(x, \theta)$.

The three assumptions above will be made throughout. However, the fourth assumption will only be made in a subset of our results.

Assumption 4. (Unique Optimal Action) The optimal action $x^* = \operatorname{argmax}_{x\in\mathcal{X}} \bar r(x, \theta_p^*)$ is unique. Denote a second best action (which may not be unique) by $x^{**} = \operatorname{argmax}_{x\in\mathcal{X},\, x\neq x^*} \bar r(x, \theta_p^*)$. Note that $\Delta = \bar r(x^*, \theta_p^*) - \bar r(x^{**}, \theta_p^*)$.
3 Phased Exploration with Greedy Exploitation

Algorithm 1 (PEGE) uses the classic idea of doing exploration in phases that are successively further apart from each other. In between exploration phases, we select an action greedily by completely trusting the current estimates. The constant $\beta$ controls how much we explore in a given phase and the constant $\alpha$, along with the function $C(\cdot)$, determines how much we exploit. This idea is classic in the bandit literature [9-11] but has not been applied to the CPM framework, to the best of our knowledge.
Algorithm 1 The PEGE Algorithmic Framework
1: Inputs: $\alpha$, $\beta$ and function $C(\cdot)$ (to determine amount of exploration/exploitation in each phase).
2: For $b = 1, 2, \ldots$
3:   Exploration
4:   For $i = 1$ to $|\sigma|$ ($\sigma$ is the global observable set)
5:     For $j = 1$ to $b^\beta$
6:       Let $t_{j,i} = t$ and $\theta(t_{j,i}, b) = \theta(t)$, where $t$ is the current time point
7:       Play $x_i \in \sigma$ and get feedback $M_{x_i}\cdot\theta(t_{j,i}, b) \in \mathbb{R}^{m_{x_i}}$.
8:     End For
9:   End For
10:  Estimation
11:  $\hat\theta_{j,i} = M_\sigma^+\,\big(M_{x_1}\cdot\theta(t_{j,1}, i), \ldots, M_{x_{|\sigma|}}\cdot\theta(t_{j,|\sigma|}, i)\big) \in \mathbb{R}^n$.
12:  $\hat\theta(b) = \dfrac{\sum_{i=1}^{b}\sum_{j=1}^{i^\beta} \hat\theta_{j,i}}{\sum_{j=1}^{b} j^\beta} \in \mathbb{R}^n$.
13:  $x(b) \in \operatorname{argmax}_{x\in\mathcal{X}} \bar r(x, \hat\theta(b))$.
14:  Exploitation
15:  For $i = 1$ to $\exp(C(b^\alpha))$
16:    Play $x(b)$.
17:  End For
18: End For
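A compact Python sketch of this loop is given below. It assumes callables for the environment and the argmax oracle (`sample_theta`, `feedback`, `oracle`) and a precomputed pseudoinverse `M_sigma_pinv`; all of these names are assumptions made for the sketch, not the paper's code.

```python
import numpy as np

def pege(sigma, M_sigma_pinv, sample_theta, feedback, oracle,
         alpha, beta, C, horizon):
    """Sketch of Algorithm 1; records greedily exploited actions only."""
    t, b = 0, 0
    sum_est, sum_weights, history = 0.0, 0.0, []
    theta_hat = None
    while t < horizon:
        b += 1
        # Exploration: sweep sigma b**beta times (may slightly overshoot horizon).
        phase_estimates = []
        for _ in range(int(b ** beta)):
            stacked = []
            for x in sigma:
                theta = sample_theta()            # adversary's hidden move
                stacked.append(feedback(x, theta))
                t += 1
            # One estimate theta_hat_{j,b} from one sweep of sigma.
            phase_estimates.append(M_sigma_pinv @ np.concatenate(stacked))
        sum_est = sum_est + np.sum(phase_estimates, axis=0)
        sum_weights += b ** beta                  # denominator sum_j j**beta
        theta_hat = sum_est / sum_weights
        # Greedy exploitation for exp(C(b**alpha)) rounds.
        x_greedy = oracle(theta_hat)
        for _ in range(int(np.exp(C(b ** alpha)))):
            if t >= horizon:
                break
            sample_theta()     # environment moves; no feedback is observed
            history.append(x_greedy)
            t += 1
    return theta_hat, history
```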
It is easy to see that the estimators in Algorithm 1 have the following properties: $E_p[\hat\theta_{j,i}] = M_\sigma^+\,(M_{x_1}\cdot\theta_p^*, \ldots, M_{x_{|\sigma|}}\cdot\theta_p^*) = M_\sigma^+ M_\sigma\, \theta_p^* = \theta_p^*$ and hence $E_p[\hat\theta] = \theta_p^*$. Using the fact that $M_\sigma^+ = (M_\sigma^\top M_\sigma)^{-1} M_\sigma^\top$, we also have the following bound on the estimation error of $\theta_p^*$:
$$\|\hat\theta_{j,i} - \theta_p^*\|_2 = \big\|M_\sigma^+\,(M_{x_1}\cdot\theta(t_{j,1}, i), \ldots, M_{x_{|\sigma|}}\cdot\theta(t_{j,|\sigma|}, i)) - M_\sigma^+ M_\sigma \theta_p^*\big\|_2$$
$$= \Big\|(M_\sigma^\top M_\sigma)^{-1} \sum_{k=1}^{|\sigma|} M_{x_k}^\top M_{x_k}\,\big(\theta(t_{j,k}, i) - \theta_p^*\big)\Big\|_2 \;\le\; \sqrt{n}\,\sum_{k=1}^{|\sigma|} \big\|(M_\sigma^\top M_\sigma)^{-1} M_{x_k}^\top M_{x_k}\big\|_2 \;=:\; \rho_\sigma \qquad (3)$$
where the constant $\rho_\sigma$ defined above depends only on the structure of the linear transformation matrices of the global observable set and not on the adversary strategy $p$.
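For concreteness, the constant $\rho_\sigma$ of Eq. (3) can be computed directly from the per-action matrices. A small numeric sketch follows; the identity-feedback toy example is an assumption made for illustration.

```python
import numpy as np

def rho_sigma(blocks):
    """blocks: list of per-action matrices M_x whose vertical stack is M_sigma."""
    M = np.vstack(blocks)
    n = M.shape[1]
    G_inv = np.linalg.inv(M.T @ M)   # (M_sigma^T M_sigma)^{-1}
    # sqrt(n) * sum_k || G_inv @ M_{x_k}^T M_{x_k} ||_2  (spectral norms)
    return np.sqrt(n) * sum(np.linalg.norm(G_inv @ (B.T @ B), 2) for B in blocks)

blocks = [np.eye(3)[[0]], np.eye(3)[[1]], np.eye(3)[[2]]]  # identity feedback
print(rho_sigma(blocks))  # each summand has norm 1, so this is 3*sqrt(3)
```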
Our first result is about the regret of Algorithm 1 when, within phase number $b$, the exploration part spends $|\sigma|$ rounds (constant w.r.t. $b$) and the exploitation part grows polynomially with $b$.

Theorem 1. (Distribution Independent Regret) When Algorithm 1 is initialized with the parameters $C(a) = \log a$, $\alpha = 1/2$ and $\beta = 0$, and the online game is played over $T$ rounds, we get the following bound on expected regret:
$$R(T) \;\le\; R_{\max}\,|\sigma|\,T^{2/3} + 2R\rho_\sigma\, T^{2/3}\sqrt{\log 2e^2 + 2\log T} + R_{\max} \qquad (4)$$
where $\rho_\sigma$ is the constant as defined in Eq. (3).
Our next result is about the regret of Algorithm 1 when, within phase number $b$, the exploration part spends $|\sigma| \cdot b$ rounds (linearly increasing with $b$) and the exploitation part grows exponentially with $b$.

Theorem 2. (Distribution Dependent Regret) When Algorithm 1 is initialized with the parameters $C(a) = h \cdot a$, for a tuning parameter $h > 0$, $\alpha = 1$ and $\beta = 1$, and the online game is played over $T$ rounds, we get the following bound on expected regret:
$$R(T) \;\le\; \frac{\log^2 T}{h} \sum_{x \in \sigma} \Delta_x \;+\; \frac{4\sqrt{2\pi}\, e^2 R\, \Delta_{\max}\, \rho_\sigma}{\Delta}\; e^{\,h^2 (2R^2\rho_\sigma^2)/\Delta^2}. \qquad (5)$$

Such an explicit bound for a PEGE algorithm that is polylogarithmic in $T$ and explicitly states the multiplicative and additive constants involved is not known, to the best of our knowledge, even in the bandit literature (e.g., earlier bounds [10] are asymptotic), whereas here we prove it in the CPM setting. Note that the additive constant above, though finite, blows up exponentially fast as $\Delta \to 0$ for a fixed $h$. It is well behaved, however, if the tuning parameter $h$ is on the same scale as $\Delta$. This line of thought motivates us to estimate the gap to within constant factors and then feed that estimate into a PEGE algorithm. This is what we will do in the next section.
4 Combining Gap Estimation with PEGE

Algorithm 2 tries to estimate the gap $\Delta$ to within a constant multiplicative factor. However, if there is no unique optimal action or when the true gap is small, gap estimation can take a very large amount of time. To prevent that from happening, the algorithm also takes in a threshold $T_0$ as input and definitely stops if the threshold is reached. The result below assures us that, with high probability, the algorithm behaves as expected. That is, if there is a unique optimal action and the gap is large enough to be estimated with a given confidence before the threshold $T_0$ kicks in, it will output an estimate $\hat\Delta$ in the range $[0.5\Delta, 1.5\Delta]$. On the other hand, if there is no unique optimal action, it does not generate an estimate of $\Delta$ and instead runs out of the exploration budget $T_0$.
Theorem 3. (Gap Estimation within Constant Factors) Let $T_0 \ge 1$ and $\delta \in (0,1)$, and define
$$T_1(\delta) = \frac{256\, R^2 \rho_\sigma^2}{\Delta^2}\,\log\frac{512\, e^2 R^2 \rho_\sigma^2}{\Delta^2\,\delta}, \qquad T_2(\delta) = \frac{16\, R^2 \rho_\sigma^2}{\Delta^2}\,\log\frac{4e^2}{\delta}.$$
Consider Algorithm 2 run with
$$w(b) = \sqrt{\frac{R^2 \rho_\sigma^2\, \log(4 e^2 b^2 / \delta)}{b}}. \qquad (6)$$
Then, the following 3 claims hold.

1. Suppose Assumption 4 holds and $T_1(\delta) < T_0$. Then with probability at least $1 - \delta$, Algorithm 2 stops in $T_1(\delta)$ episodes and outputs an estimate $\hat\Delta$ that satisfies $\frac{1}{2}\Delta \le \hat\Delta \le \frac{3}{2}\Delta$.

2. Suppose Assumption 4 holds and $T_0 \le T_1(\delta)$. Then with probability at least $1 - \delta$, the algorithm either outputs "threshold exceeded" or outputs an estimate $\hat\Delta$ that satisfies $\frac{1}{2}\Delta \le \hat\Delta \le \frac{3}{2}\Delta$. Furthermore, if it outputs $\hat\Delta$, it must be the case that the algorithm stopped at an episode $b$ such that $T_2(\delta) < b < T_0$.

3. Suppose Assumption 4 fails. Then, with probability at least $1 - \delta$, Algorithm 2 stops in $T_0$ episodes and outputs "threshold exceeded".
Algorithm 2 Algorithm for Gap Estimation
1: Inputs: $T_0$ (exploration threshold) and $\delta$ (confidence parameter)
2: For $b = 1, 2, \ldots$
3:   Exploration
4:   For $i = 1$ to $|\sigma|$
5:     (Denote) $t_i = t$ and $\theta(t_i, b) = \theta(t)$ ($t$ is the current time point).
6:     Play $x_i \in \sigma$ and get feedback $M_{x_i}\cdot\theta(t_i, b) \in \mathbb{R}^{m_{x_i}}$.
7:   End For
8:   Estimation
9:   $\hat\theta_b = M_\sigma^+\,\big(M_{x_1}\cdot\theta(t_1, b), \ldots, M_{x_{|\sigma|}}\cdot\theta(t_{|\sigma|}, b)\big) \in \mathbb{R}^n$.
10:  $\hat\theta(b) = \frac{1}{b}\sum_{i=1}^{b} \hat\theta_i \in \mathbb{R}^n$.
11:  Stopping Rule ($w(b)$ is defined as in Eq. (6))
12:  If $\operatorname{argmax}_{x\in\mathcal{X}} \bar r(x, \hat\theta(b))$ is unique:
13:    $\hat x(b) = \operatorname{argmax}_{x\in\mathcal{X}} \bar r(x, \hat\theta(b))$
14:    $\hat x'(b) = \operatorname{argmax}_{x\in\mathcal{X},\, x \neq \hat x(b)} \bar r(x, \hat\theta(b))$ (need not be unique)
15:    If $\bar r(\hat x(b), \hat\theta(b)) - \bar r(\hat x'(b), \hat\theta(b)) > 6\,w(b)$:
16:      STOP and output $\hat\Delta = \bar r(\hat x(b), \hat\theta(b)) - \bar r(\hat x'(b), \hat\theta(b))$
17:    End If
18:  End If
19:  If $b > T_0$:
20:    STOP and output "threshold exceeded"
21:  End If
22: End For
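The stopping rule translates directly into code. The sketch below reuses the callable conventions from the PEGE sketch above and adds an assumed helper `top_two(theta)` returning the best and second-best actions under $\bar r(\cdot, \theta)$; these names are illustrative, not the paper's.

```python
import numpy as np

def estimate_gap(sigma, M_sigma_pinv, sample_theta, feedback, top_two, rbar,
                 R, rho_sigma, T0, delta):
    """Sketch of Algorithm 2: returns a gap estimate, or None if T0 is hit."""
    running = 0.0
    for b in range(1, int(T0) + 1):
        stacked = [feedback(x, sample_theta()) for x in sigma]
        running = running + M_sigma_pinv @ np.concatenate(stacked)
        theta_hat = running / b                      # average of b estimates
        w_b = np.sqrt(R**2 * rho_sigma**2
                      * np.log(4 * np.e**2 * b**2 / delta) / b)
        best, second = top_two(theta_hat)
        gap_hat = rbar(best, theta_hat) - rbar(second, theta_hat)
        if gap_hat > 6 * w_b:
            return gap_hat          # in [Delta/2, 3*Delta/2] w.h.p. (Theorem 3)
    return None                     # "threshold exceeded"
```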
Equipped with Theorem 3, we are now ready to combine Algorithm 2 with Algorithm 1 to give Algorithm 3. Algorithm 3 first calls Algorithm 2. If Algorithm 2 outputs an estimate $\hat\Delta$, it is fed into Algorithm 1. If the threshold $T_0$ is exceeded, then the remaining time is spent in pure exploitation. Note that by choosing $T_0$ to be of order $T^{2/3}$ we can guarantee a worst case regret of the same order even when the unique optimality assumption fails. For PM games that are globally observable but not locally observable, such a distribution independent $O(T^{2/3})$ bound is known to be optimal [4].
Theorem 4. (Regret Bound for PEGE2) Consider Algorithm 3 run with knowledge of the number $T$ of rounds. Consider the distribution independent bound
$$B_1(T) = 2\,\big(2R\rho_\sigma\,|\sigma|^2 R_{\max}\, T\big)^{2/3}\sqrt{\log(4e^2 T^3)} + R_{\max},$$
and the distribution dependent bound
$$B_2(T) = \frac{256\, R^2\rho_\sigma^2}{\Delta^2}\,\log\!\Big(\frac{512\, e^2 R^2\rho_\sigma^2\, T}{\Delta^2}\Big)\, R_{\max}\,|\sigma| \;+\; \frac{36\, R^2\rho_\sigma^2 \log T}{\Delta^2}\sum_{x\in\sigma}\Delta_x \;+\; \frac{8\, e^2 R^2\rho_\sigma^2}{\Delta^2}\,\Delta_{\max} \;+\; R_{\max}.$$
If Assumption 4 fails, then the expected regret of Algorithm 3 is bounded as $R(T) \le B_1(T)$. If Assumption 4 holds, then the expected regret of Algorithm 3 is bounded as
$$R(T) \;\le\; \begin{cases} B_2(T) & \text{if } T_1(\delta) < T_0 \\ O(T^{2/3}\sqrt{\log T}) & \text{if } T_0 \le T_1(\delta) \end{cases} \qquad (7)$$
where $T_1(\delta)$ is as defined in Theorem 3 and $\delta$, $T_0$ are as defined in Algorithm 3.

In the above theorem, note that $T_1(\delta)$ scales as $\Theta\big(\frac{1}{\Delta^2}\log\frac{T}{\Delta^2}\big)$ and $T_0$ as $\Theta(T^{2/3})$. Thus, the two cases in Eq. (7) correspond to the large gap and small gap situations, respectively.
Algorithm 3 Algorithm Combining PEGE with Gap Estimation (PEGE2)
1: Input: $T$ (total number of rounds)
2: Call Algorithm 2 with inputs $T_0 = \Big(\dfrac{2R\rho_\sigma T}{|\sigma| R_{\max}}\Big)^{2/3}$ and $\delta = 1/T$
3: If Algorithm 2 returns "threshold exceeded":
4:   Let $\hat\theta(T_0)$ be the latest estimate of $\theta_p^*$ maintained by Algorithm 2
5:   Play $\hat x(T_0) = \operatorname{argmax}_{x\in\mathcal{X}} \bar r(x, \hat\theta(T_0))$ for the remaining $T - T_0|\sigma|$ rounds
6: Else:
7:   Let $\hat\Delta$ be the gap estimate produced by Algorithm 2
8:   For all remaining time steps, run Algorithm 1 with parameters $C(a) = ha$ with $h = \dfrac{\hat\Delta^2}{9R^2\rho_\sigma^2}$, $\alpha = 1$, $\beta = 0$
9: End If
5 Comparison with GCB Algorithm

We provide a detailed comparison of our results with those obtained for GCB [1]. (a) While we use the same CPM model, our solution is inspired by the forced exploration technique while GCB
is inspired by the confidence bound technique, both of which are classic in the bandit literature. (b) One instantiation of our PEGE framework gives an $O(T^{2/3}\sqrt{\log T})$ distribution independent regret bound (Theorem 1), which does not require a call to the arg-secondmax oracle. This is of substantial practical advantage over GCB since, even for linear optimization problems over polyhedra, standard routines usually do not have the option of computing action(s) that achieve the second maximum value of the objective function. (c) Another instantiation of the PEGE framework gives an $O(\log^2 T)$ distribution dependent regret bound (Theorem 2), which neither requires a call to the arg-secondmax oracle nor the assumption of existence of a unique optimal action for the learner. This is once again important, since the assumption of existence of a unique optimal action might be impractical, especially for an exponentially large action space. However, the caveat is that an improper setting of the tuning parameter $h$ in Theorem 2 can lead to an exponentially large additive component in the regret. (d) A crucial point, which we had highlighted in the beginning, is that the regret bounds achieved by PEGE and PEGE2 do not depend on the size of the learner's action space, i.e., $|\mathcal{X}|$. The dependence is only on the size of the global observable set $\sigma$, which is guaranteed to be no more than the dimension of the adversary's action space. Thus, though we have adopted the CPM model [1], our algorithms achieve meaningful regret bounds for countably infinite or even continuous learner's action spaces. In contrast, the GCB regret bounds have an explicit, logarithmic dependence on the size of the learner's action space. Thus, their results cannot be extended to problems with an infinite learner's action space (see Section 6 for an example), and are restricted to large, but finite, action spaces. (e) The PEGE2 algorithm is a true analogue of the GCB algorithm, matching the regret bounds of GCB in terms of $T$ and gap $\Delta$, with the advantage that it has no dependence on $|\mathcal{X}|$. The disadvantage, however, is that PEGE2 requires knowledge of the time horizon $T$, while GCB is an anytime algorithm. It remains an open problem to design an algorithm that combines the strengths of PEGE2 and GCB.
6 Application to Online Ranking
A recent paper studied the problem of online ranking with feedback restricted to top ranked items [12]. The problem was studied in a non-stochastic setting, i.e., it was assumed that an oblivious adversary generates reward vectors. Moreover, the learner's action space was exponentially large in the number of items to be ranked. The paper made the connection of the problem setting to PM games (but not combinatorial PM games) and proposed an efficient algorithm for the specific problem at hand. However, a careful reading of the paper shows that their algorithmic techniques can handle the CPM model we have discussed so far, but in the non-stochastic setting. The reward function is linear in both learner's and adversary's moves, the adversary's move is restricted to a finite space of vectors, and the feedback is a linear transformation of the adversary's move. In this section, we give a brief description of the problem setting and show how our algorithms can be used to efficiently solve the problem of online ranking with feedback on top ranked items in the stochastic setting. We also give an example of how the ranking problem setting can be somewhat naturally extended to one which has a continuous action space for the learner, instead of a large but finite action space.

The paper considered an online ranking problem, where a learner repeatedly re-ranks a set of $n$ fixed items, to satisfy diverse users' preferences, who visit the system sequentially. Each learner action $x$
is a permutation of the $n$ items. Each user has like/dislike preferences for the items, varying between users, with each user's preferences encoded as an $n$-length binary relevance vector $\theta$. Once the ranked list of items is presented to the user, the user scans through the items, but gives relevance feedback only on the top ranked item. However, the performance of the learner is judged based on the full ranked list and the unrevealed, full relevance vector. Thus, we have a PM game, where neither the adversary-generated relevance vector nor the reward is revealed to the learner. The paper showed how a number of practical ranking measures, like Discounted Cumulative Gain (DCG), can be expressed as a linear function, i.e., $r(x, \theta) = f(x)\cdot\theta$. The practical motivation of the work was based on learning a ranking strategy to satisfy diverse user preferences, but with limited feedback received due to user burden constraints and privacy concerns.

Online Ranking with Feedback at Top as a Stochastic CPM Game. We show how our algorithms can be applied in online ranking with feedback for top ranked items by showing how it is a specific instance of the CPM model and how our key assumptions are satisfied. The learner's action space is the finite but exponentially large space of $\mathcal{X} = n!$ permutations. The adversary's move is an $n$-dimensional relevance vector, and thus is restricted to $\{0,1\}^n$ (a finite space of size $2^n$) contained in $[0,1]^n$. In the stochastic setting, we can assume that the adversary samples $\theta \in \{0,1\}^n$ from a fixed distribution on the space. Since the feedback on playing a permutation is the relevance of the top ranked item, each move $x$ has an associated transformation matrix (vector) $M_x \in \{0,1\}^n$, with 1 in the place of the item which is ranked at the top by $x$ and 0 everywhere else. Thus, $M_x \cdot \theta$ gives the relevance of the item ranked at the top by $x$. The global observable set $\sigma$ is the set of any $n$ actions where each action, in turn, puts a distinct item on top. Hence, $M_\sigma$ is an $n \times n$ permutation matrix. Assumption 1 is satisfied because the reward function is linear in $\theta$ and $\bar r(x, \theta_p^*) = f(x)\cdot\theta_p^*$, where $E_p[\theta] = \theta_p^* \in [0,1]^n$. Assumption 2 is satisfied since there will always be a global observable set of size $n$, and it can be found easily. In fact, there will be multiple global observable sets, with the freedom to choose any one of them. Assumption 3 is satisfied because the expected reward function is linear in its second argument. The Lipschitz constant is $\max_{x\in\mathcal{X}} \|f(x)\|_2$, which is always less than some small polynomial factor of $n$, depending on the specific $f(\cdot)$. The value of $\rho_\sigma$ can easily be seen to be $n^{3/2}$. The argmax oracle returns the permutation which simply sorts the items according to their corresponding $\theta$ values. The arg-secondmax oracle is more complicated, though feasible: it requires first sorting the items according to $\theta$, then comparing each pair of consecutive items to see where the least drop in reward value occurs, and switching the corresponding items.
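For a DCG-style linear reward, both oracles therefore reduce to a sort plus one adjacent swap. The sketch below illustrates this; the discount vector is an assumed choice (the paper only requires $r$ to be linear in $\theta$), and the adjacent-swap rule assumes the sorted values are distinct.

```python
import numpy as np

def argmax_oracle(theta, disc):
    # Best permutation: sort items by theta descending (rearrangement inequality,
    # valid because disc is decreasing).
    return np.argsort(-theta)

def second_argmax_oracle(theta, disc):
    # Start from the sorted order, then swap the adjacent pair whose swap
    # loses the least reward.
    order = np.argsort(-theta)
    vals = theta[order]
    losses = (disc[:-1] - disc[1:]) * (vals[:-1] - vals[1:])  # cost of swapping i, i+1
    i = int(np.argmin(losses))
    second = order.copy()
    second[i], second[i + 1] = second[i + 1], second[i]
    return second

theta = np.array([0.9, 0.1, 0.5, 0.4])
disc = 1.0 / np.log2(np.arange(2, 6))   # DCG discounts 1/log2(rank + 1)
print(argmax_oracle(theta, disc), second_argmax_oracle(theta, disc))
```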
Likely Failure of Unique Optimal Action Assumption. Assumption 4 is unlikely to hold in this problem setting (though it is of course theoretically possible). The mean relevance vector $\theta_p^*$ effectively reflects the average preference of all users for each of the $n$ items. It is very likely that at least a few items will not be liked by anyone and will consequently always be ranked at the bottom. Equally possible is that two items will have the same user preference on average, and can be exchanged without hurting the optimal ranking. Thus, the existence of a unique optimal ranking, which would require that each item have a different average user preference from every other item, is unlikely. The PEGE algorithm can still be applied to get poly-logarithmic regret (Theorem 2), but GCB will only achieve $O(T^{2/3}\log T)$ regret.
A PM Game with Infinite Learner Action Space. We give a simple modification of the ranking problem above to show how the learner can have a continuous action space. The learner now ranks the items by producing an $n$-dimensional score vector $x \in [0,1]^n$ and sorting the items according to their scores. Thus the learner's action space is now an uncountably infinite continuous space. As before, the user gets to see the ranked list and gives relevance feedback on the top ranked item. The learner's performance will now be judged by a continuous loss function, instead of a discrete-valued ranking measure, since its moves are in a continuous space. Consider the simplest loss, viz., the squared "loss" $r(x, \theta) = -\|x - \theta\|_2^2$ (note the negative sign to keep the reward interpretation). It can easily be seen that $\bar r(x, \theta_p^*) = E_{\theta\sim p}[r(x, \theta)] = -\|x\|_2^2 + 2\,x\cdot\theta_p^* - \mathbf{1}\cdot\theta_p^*$, if the relevance vectors $\theta$ are in $\{0,1\}^n$. Thus, the Lipschitz condition is satisfied. The global observable set is still of size $n$, with the $n$ actions being any $n$ score vectors whose sorted orders place each of the $n$ items, in turn, on top. $\rho_\sigma$ remains the same as before, with $\operatorname{argmax}_x E_{\theta\sim p}\, r(x, \theta) = E_{\theta\sim p}[\theta] = \theta_p^*$. Both PEGE and PEGE2 can achieve meaningful regret bounds for this problem, while GCB cannot.
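A quick numeric check of the last identity (with an arbitrary assumed distribution): the maximizer of the expected squared-loss reward is indeed the mean vector $\theta_p^*$.

```python
import numpy as np

rng = np.random.default_rng(0)
theta_star = np.array([0.8, 0.3, 0.5])
thetas = rng.binomial(1, theta_star, size=(100_000, 3))   # theta ~ p, in {0,1}^n

def expected_reward(x):
    # Monte Carlo estimate of E[-||x - theta||^2]
    return -np.mean(np.sum((x - thetas) ** 2, axis=1))

print(expected_reward(theta_star))            # best achievable value
print(expected_reward(np.round(theta_star)))  # any other x does worse
```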
Acknowledgements
We acknowledge the support of NSF via grants IIS 1452099 and CCF 1422157.
References
[1] Tian Lin, Bruno Abrahao, Robert Kleinberg, John Lui, and Wei Chen. Combinatorial partial monitoring game with linear feedback and its applications. In Proceedings of the 31st International Conference on Machine Learning, pages 901–909. ACM, 2014.
[2] Antonio Piccolboni and Christian Schindelhauer. Discrete prediction games with arbitrary feedback and loss. In Proceedings of the 14th Annual Conference on Computational Learning Theory, pages 208–223. Springer, 2001.
[3] Nicolò Cesa-Bianchi, Gábor Lugosi, and Gilles Stoltz. Regret minimization under partial monitoring. Mathematics of Operations Research, pages 562–580, 2006.
[4] Gábor Bartók et al. Partial monitoring - classification, regret bounds, and algorithms. Mathematics of Operations Research, 39(4):967–997, 2014.
[5] Junpei Komiyama, Junya Honda, and Hiroshi Nakagawa. Regret lower bound and optimal algorithm in finite stochastic partial monitoring. In Advances in Neural Information Processing Systems, pages 1783–1791, 2015.
[6] Peter Auer, Nicolò Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2-3):235–256, 2002.
[7] Wei Chen, Yajun Wang, and Yang Yuan. Combinatorial multi-armed bandit: General framework and applications. In Proceedings of the 30th International Conference on Machine Learning, pages 151–159, 2013.
[8] Branislav Kveton, Zheng Wen, Azin Ashkan, and Csaba Szepesvári. Tight regret bounds for stochastic combinatorial semi-bandits. In Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics, pages 535–543, 2015.
[9] Herbert Robbins. Some aspects of the sequential design of experiments. In Herbert Robbins Selected Papers, pages 169–177. Springer, 1985.
[10] Rajeev Agrawal and Demosthenis Teneketzis. Certainty equivalence control with forcing: revisited. Systems & Control Letters, 13(5):405–412, 1989.
[11] Paat Rusmevichientong and John N. Tsitsiklis. Linearly parameterized bandits. Mathematics of Operations Research, 35(2):395–411, 2010.
[12] Sougata Chaudhuri and Ambuj Tewari. Online ranking with top-1 feedback. In Proceedings of the 18th International Conference on Artificial Intelligence and Statistics, pages 129–137. ACM, 2015.
[13] Thomas P. Hayes. A large-deviation inequality for vector-valued martingales. Combinatorics, Probability and Computing, 2005.
5,745 | 6,199 | Near-Optimal Smoothing of Structured Conditional
Probability Matrices
Moein Falahatgar
University of California, San Diego
San Diego, CA, USA
[email protected]
Mesrob I. Ohannessian
Toyota Technological Institute at Chicago
Chicago, IL, USA
[email protected]
Alon Orlitsky
University of California, San Diego
San Diego, CA, USA
[email protected]
Abstract
Utilizing the structure of a probabilistic model can significantly increase its learning
speed. Motivated by several recent applications, in particular bigram models
in language processing, we consider learning low-rank conditional probability
matrices under expected KL-risk. This choice makes smoothing, that is the careful
handling of low-probability elements, paramount. We derive an iterative algorithm
that extends classical non-negative matrix factorization to naturally incorporate
additive smoothing and prove that it converges to the stationary points of a penalized
empirical risk. We then derive sample-complexity bounds for the global minimizer
of the penalized risk and show that it is within a small factor of the optimal
sample complexity. This framework generalizes to more sophisticated smoothing
techniques, including absolute-discounting.
1
Introduction
One of the fundamental tasks in statistical learning is probability estimation. When the possible
outcomes can be divided into k discrete categories, e.g. types of words or bacterial species, the task
of interest is to use data to estimate the probability masses p_1, …, p_k, where p_j is the probability of
observing category j. More often than not, it is not a single distribution that is to be estimated, but
multiple related distributions, e.g. frequencies of words within various contexts or species in different
samples. We can group these into a conditional probability (row-stochastic) matrix P_{i,1}, …, P_{i,k}
as i varies over c contexts, and P_ij represents the probability of observing category j in context i.
Learning these distributions individually would cause the data to be unnecessarily diluted. Instead,
the structure of the relationship between the contexts should be harnessed.
A number of models have been proposed to address this structured learning task. One of the wildly
successful approaches consists of positing that P, despite being a c × k matrix, is in fact of much lower
rank m. Effectively, this means that there exists a latent context space of size m ≪ c, k into which
the original context maps probabilistically via a c × m stochastic matrix A, then this latent context
in turn determines the outcome via an m × k stochastic matrix B. Since this structural model means
that P factorizes as P = AB, this problem falls within the framework of low-rank (non-negative)
matrix factorization. Many topic models, such as the original work on probabilistic latent semantic
analysis PLSA, also map to this framework. We narrow our attention here to such low-rank models,
but note that more generally these efforts fall under the areas of structured and transfer learning.
Other examples include: manifold learning, multi-task learning, and hierarchical models.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
In natural language modeling, low-rank models are motivated by the inherent semantics of language:
context first maps into meaning which then maps to a new word prediction. An alternative form
of such latent structure, word embeddings derived from recurrent neural networks (or LSTMs) are
the state-of-the-art of current language models, [20, 25, 28]. A first chief motivation for the present
work is to establish a theoretical underpinning of the success of such representations. We restrict the
exposition to bigram models. The traditional definition of the bigram is that language is modeled as a
sequence of words generated by a first order Markov chain. Therefore the "context" of a new word is
simply its preceding word, and we have c = k. Since the focus here is not the dependencies induced
by such memory, but rather the ramifications of the structural assumptions on P, we take bigrams to
model word-pairs independently sampled by first choosing the contextual word with probability π
and then choosing the second word according to the conditional probability P, thus resulting in a
joint distribution over word-pairs (π_i P_ij).
What is the natural measure of performance for a probability matrix estimator? Since ultimately
such estimators are used to accurately characterize the likelihood of test data, the measure of choice
used in empirical studies is the perplexity, or alternatively its logarithm, the cross entropy. For data
consisting of n word-pairs, if C_ij is the number of times pair (i, j) appears, then the cross entropy of an estimator Q is
$$\frac{1}{n} \sum_{ij} C_{ij} \log \frac{1}{Q_{ij}}.$$
The population quantity that corresponds to this empirical performance measure is the (row-by-row weighted) KL-divergence
$$D(P \| Q) = \sum_{ij} \pi_i P_{ij} \log \frac{P_{ij}}{Q_{ij}}.$$
Note that this is indeed the expectation of the cross entropy modulo the true entropy, an additive term
that does not depend on Q. This is the natural notion of risk for the learning task, since we wish
to infer the likelihood of future data, and our goal can now be more concretely stated as using the
data to produce an estimator Q_n with a "small" value of D(P‖Q_n). The choice of KL-divergence
introduces a peculiar but important problem: the necessity to handle small frequencies appropriately.
In particular, using the empirical conditional probability is not viable, since a zero in Q implies
infinite risk. This is the problem of smoothing, which has received a great amount of attention by the
NLP community. Our second salient motivation for the present work is to propose principled methods
of integrating well-established smoothing techniques, such as add-1/2 and absolute discounting, into
the framework of structured probability matrix estimation.
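To make these two performance measures concrete, the following NumPy sketch (ours, not the authors' code; the names are illustrative, and Q is assumed strictly positive so the logarithms are finite) computes the empirical cross entropy of an estimate and the weighted KL-risk:

```python
import numpy as np

def cross_entropy(C, Q):
    """Empirical cross entropy (1/n) sum_ij C_ij log(1/Q_ij) for a k x k
    count matrix C and an estimate Q with strictly positive entries."""
    n = C.sum()
    mask = C > 0  # unobserved pairs contribute nothing to the sum
    return float((C[mask] * np.log(1.0 / Q[mask])).sum() / n)

def weighted_kl(pi, P, Q):
    """Row-weighted KL-divergence D(P||Q) = sum_ij pi_i P_ij log(P_ij/Q_ij)."""
    mask = P > 0
    terms = np.zeros_like(P, dtype=float)
    terms[mask] = P[mask] * np.log(P[mask] / Q[mask])
    return float((pi[:, None] * terms).sum())
```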
Our contributions are as follows; we provide:
- A general framework for integrating smoothing and structured probability matrix estimation, as an alternating minimization that converges to a stationary point of a penalized empirical risk.
- A sample complexity upper bound of O(km log²(2n + k)/n) for the expected KL-risk, for the global minimizer of this penalized empirical risk.
- A lower bound that matches this upper bound up to the logarithmic term, showing near-optimality.
The paper is organized as follows. Section 2 reviews related work. Section 3 states the problem
and Section 4 highlights our main results. Section 5 proposes our central algorithm and Section 6
analyzes its idealized variant. Section 7 provides some experiments and Section 8 concludes.
2 Related Work
Latent variable models, and in particular non-negative matrix factorization and topic models, have
been such an active area of research in the past two decades that the space here cannot possibly do
justice to the many remarkable contributions. We list here some of the most relevant to place our
work in context. We start by mentioning the seminal papers [12, 18] which proposed the alternating
minimization algorithm that forms the basis of the current work. This has appeared in many forms in
the literature, including the multiplicative updates [29]. Some of the earliest work is reviewed in [23].
These may be generally interpreted as discrete analogs to PCA (and even ICA) [10].
An influential Bayesian generative topic model, the Latent Dirichlet Allocation, [7] is very closely
related to what we propose. In fact, add-half smoothing effectively corresponds to a Dirichlet(1/2)
(Jeffreys) prior. Our exposition differs primarily in adopting a minimax sample complexity perspective
which is often not found in the otherwise elegant Bayesian framework. Furthermore, exact Bayesian
inference remains a challenge and a lot of effort has been expended lately toward simple iterative
algorithms with provable guarantees, e.g. [3, 4]. Besides, a rich array of efficient smoothing
techniques exists for probability vector estimation [2, 16, 22, 26], of which one could directly avail
in the methodology that is presented here.
A direction that is very related to ours was recently proposed in [13]. There, the primary goal is to
recover the rows of A and B in ℓ1-risk. This is done at the expense of additional separation conditions
on these rows. This makes the performance measure not easily comparable to our context, though
with the proper weighted combination it is easy to see that the implied ℓ1-risk result on P is subsumed
by our KL-risk result (via Pinsker's inequality), up to logarithmic factors, while the reverse isn't true.
Furthermore, the framework of [13] is restricted to symmetric joint probability matrices, and uses
an SVD-based algorithm that is difficult to scale beyond very small latent ranks m. Apart from this
recent paper for the ℓ1-risk, sample complexity bounds for related (not fully latent) models have been
proposed for the KL-risk, e.g. [1]. But these remain partial, and far from optimal. It is also worth
noting that information geometry gives conditions under which KL-risk behaves close to ℓ2-risk [8],
thus leading to a Frobenius-type risk in the matrix case.
Although the core optimization problem itself is not our focus, we note that despite being a nonconvex problem, many instances of matrix factorization admit efficient solutions. Our own heuristic
initialization method is evidence of this. Recent work, in the ℓ2 context, shows that even simple
gradient descent, appropriately initialized, could often provably converge to the global optimum [6].
Concerning whether such low-rank models are appropriate for language modeling, there has been
evidence that some of the abovementioned word embeddings [20] can be interpreted as implicit matrix
factorization [19]. Some of the traditional bigram smoothing techniques, such as the Kneser-Ney
algorithm [17, 11], are also reminiscent of rank reduction [14, 24, 15].
3 Problem Statement
Data D_n consists of n pairs (X_s, Y_s), s = 1, …, n, where X_s is a context and Y_s is the corresponding
outcome. In the spirit of a bigram language model, we assume that the context and outcome spaces
have the same cardinality, namely k. Thus (X_s, Y_s) takes values in [k]². We denote the count of pairs
(i, j) by C_ij. As a shortcut, we also write the row-sums as C_i = Σ_j C_ij.
We assume the underlying generative model of the data to be i.i.d., where each pair is drawn by first
sampling the context X_s according to a probability distribution π = (π_i) over [k] and then sampling
Y_s conditionally on X_s according to a k × k conditional probability (stochastic) matrix P = (P_ij), a
non-negative matrix where each row sums to 1. We also assume that P has non-negative rank m. We
denote the set of all such matrices by P_m. They can all be factorized (non-uniquely) as P = AB,
where both A and B are stochastic matrices in turn, of size k × m and m × k respectively.
A conditional probability matrix estimator is an algorithm that maps the data into a stochastic matrix
Q_n(X_1, …, X_n) that well-approximates P, in the absence of any knowledge about the underlying
model. We generally drop the explicit notation showing dependence on the data, and use instead
the implicit n-subscript notation. The performance, or how well any given stochastic matrix Q
approximates P, is measured according to the KL-risk:
$$R(Q) = \sum_{ij} \pi_i P_{ij} \log \frac{P_{ij}}{Q_{ij}} \qquad (1)$$
Note that this corresponds to an expected loss, with the log-loss L(Q, i, j) = log(P_ij/Q_ij). Although
we do seek out PAC-style (in-probability) bounds for R(Q_n), in order to give a concise definition
of optimality, we consider the average-case performance E[R(Q_n)]. The expectation here is with
respect to the data. Since the underlying model is completely unknown, we would like to do well
against adversarial choices of π and P, and thus we are interested in a uniform upper bound of the
form:
$$r(Q_n) = \max_{\pi,\, P \in \mathcal{P}_m} \mathbb{E}[R(Q_n)].$$
The optimal estimator, in the minimax sense, and the minimax risk of the class P_m are thus given by:
$$Q^\star_n = \arg\min_{Q_n} r(Q_n) = \arg\min_{Q_n} \max_{\pi,\, P \in \mathcal{P}_m} \mathbb{E}[R(Q_n)]$$
$$r^\star(\mathcal{P}_m) = \min_{Q_n} \max_{\pi,\, P \in \mathcal{P}_m} \mathbb{E}[R(Q_n)].$$
Explicitly obtaining minimax optimal estimators is a daunting task, and instead we would like to
exhibit estimators that compare well.
Definition 1 (Optimality). If an estimator satisfies E[R(Q_n)] ≤ α · E[R(Q*_n)] for all π and P (called an oracle
inequality), then if α is a constant (of n, k, and m), we say that the estimator is (order) optimal.
If α is not constant, but its growth is negligible with respect to the decay of r*(P_m) with n or the
growth of r*(P_m) with k or m, then we can call the estimator near-optimal. In particular, we
reserve this terminology for a logarithmic gap in growth; that is, an estimator is near-optimal if
log α / log r*(P_m) → 0 asymptotically in any of n, k, or m. Finally, if α does not depend on P we
have strong optimality, and r(Q_n) ≤ α · r*(P_m). If α does depend on P, we have weak optimality.
As a proxy to the true risk (1), we define the empirical risk:
$$R_n(Q) = \frac{1}{n} \sum_{ij} C_{ij} \log \frac{P_{ij}}{Q_{ij}} \qquad (2)$$
The conditional probability matrix that minimizes this empirical risk is the empirical conditional
probability P̂_{n,ij} = C_ij/C_i. Not only is P̂_{n,ij} not optimal, but since there always is a positive (even if
slim) probability that some C_ij = 0 even if P_ij ≠ 0, it follows that E[R_n(P̂_n)] = ∞. This shows the
importance of smoothing. The simplest benchmark smoothing that we consider is add-1/2 smoothing,
P̂^{Add-1/2}_ij = (C_ij + 1/2)/(C_i + k/2), where we give an additional "phantom" half-sample to each
word-pair, to avoid zeros. This simple method has optimal minimax performance when estimating
probability vectors. However, in the present matrix case it is possible to show that this can be a
factor of k/m away from optimal, which is significant (cf. Figure 1(a) in Section 7). Of course,
since we have not used the low-rank structure of P, we may be tempted to "smooth by factoring", by
performing a low-rank approximation of P̂_n. However, this will not eliminate the zero problem, since
a whole column may be zero. These facts highlight the importance of principled smoothing. The
problem is therefore to construct (possibly weakly) optimal or near-optimal smoothed estimators.
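For reference, the add-1/2 benchmark above is a one-liner; a sketch under our own naming:

```python
import numpy as np

def add_half(C):
    """Add-1/2 smoothing of a k x k count matrix:
    P_hat[i, j] = (C[i, j] + 1/2) / (C_i + k/2)."""
    k = C.shape[1]
    return (C + 0.5) / (C.sum(axis=1, keepdims=True) + k / 2.0)
```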
4 Main Results
In Section 5 we introduce the ADD-1/2-SMOOTHED LOW-RANK algorithm, which essentially consists
of EM-style alternating minimizations, with the addition of smoothing at each stage. Here we state
the main results. The first is a characterization of the implicit risk function that the algorithm targets.
Theorem 2 (Algorithm). Q^{Add-1/2-LR} converges to a stationary point of the penalized empirical risk
$$R_{n,\mathrm{penalized}}(W, H) = R_n(Q) + \frac{1}{2n} \sum_{i,\ell} \log \frac{1}{W_{i\ell}} + \frac{1}{2n} \sum_{\ell,j} \log \frac{1}{H_{\ell j}}, \quad \text{where } Q = WH. \qquad (3)$$
Conversely, any stationary point of (3) is a stable point of ADD-1/2-SMOOTHED LOW-RANK.
The proof of Theorem 2 follows closely that of [18]. We now consider the global minimum of
this implicit risk, and give a sample complexity bound. By doing so, we intentionally decouple the
algorithmic and statistical aspects of the problem and focus on the latter.
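Before stating the guarantee for the global minimizer, here is a direct transcription of objective (3) as a function of the factors, useful for monitoring the iterations of Section 5. This is our own sketch, not the authors' code; the part of R_n that involves the unknown P is a constant in (W, H) and is dropped, and W and H are assumed strictly positive:

```python
import numpy as np

def penalized_objective(C, W, H):
    """Objective (3) up to an additive constant: the sum_ij C_ij log P_ij
    part of R_n does not depend on (W, H) and is omitted."""
    n = C.sum()
    Q = W @ H
    mask = C > 0
    empirical = float((C[mask] * np.log(1.0 / Q[mask])).sum() / n)
    penalty = float((np.log(1.0 / W).sum() + np.log(1.0 / H).sum()) / (2 * n))
    return empirical + penalty
```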
Theorem 3 (Sample Complexity). Let Q_n ∈ P_m achieve the global minimum of Equation (3). Then
for all P ∈ P_m such that P_ij > (km/n) log(2n + k) for all i, j, and n > 3,
$$\mathbb{E}[R(Q_n)] \le c\, \frac{km}{n} \log^2(2n + k), \qquad \text{with } c = 3100.$$
We outline the proof in Section 6. The basic ingredients are: showing the problem is near-realizable,
a quantization argument to describe the complexity of Pm , and a PAC-style [27] relative uniform
convergence which uses a sub-Poisson concentration for the sums of log likelihood ratios and uniform
variance and scale bounds. Finer analysis based on VC theory may be possible, but it would need to
handle the challenge of the log-loss being possibly unbounded and negative. The following result
shows that Theorem 3 gives weak near-optimality for n large, as it is tight up to the logarithmic factor.
Theorem 4 (Lower Bound). For n > k, the minimax rate of P_m satisfies:
$$r^\star(\mathcal{P}_m) \ge c\, \frac{km}{n}, \qquad \text{with } c = 0.06.$$
This is based on the vector case lower bound and providing the oracle with additional information:
instead of only (X_s, Y_s) it observes (X_s, Z_s, Y_s), where Z_s is sampled from X_s using A and Y_s is
sampled from Z_s using B. This effectively allows the oracle to estimate A and B directly.
5 Algorithm
Our main algorithm is a direct modification of the classical alternating minimization algorithm for
non-negative matrix factorization [12, 18]. This classical algorithm (with a slight variation) can be
shown to essentially solve the following mathematical program:
$$Q^{\mathrm{NNMF}}(\pi) = \arg\min_{Q = WH} \sum_i \sum_j \pi_{ij} \log \frac{1}{Q_{ij}}.$$
The analysis is a simple extension of the original analysis of [12, 18]. By "essentially solves", we
mean that each of the update steps can be identified as a coordinate descent, reducing the cost function
and ultimately converging as T ? ? to a stationary (zero gradient) point of this function. Conversely,
all stationary points of the function are stable points of the algorithm. In particular, since the problem
is convex in W and H individually, but not jointly in both, the algorithm can be thought of as taking
exact steps toward minimizing over W (as H is held fixed) and then minimizing over H (as W is
held fixed), whence the alternating-minimization name.
Before we incorporate smoothing, note that there are two ingredients missing from this algorithm.
First, the cost function is the sum of row-by-row KL-divergences, but each row is not weighted, as
compared to Equation (1). If we think of π_ij as P̂_ij = C_ij/C_i, then the natural weight of row i is
π_i or its proxy C_i/n. For this, the algorithm can easily be patched. Similarly to the analysis of the
original algorithm, one finds that this change essentially minimizes the weighted KL-risks of the
empirical conditional probability matrix, or equivalently the empirical risk as defined in Equation (2):
$$Q^{\mathrm{LR}}(C) = \arg\min_{Q = WH} R_n(Q) = \arg\min_{Q = WH} \sum_i \frac{C_i}{n} \sum_j \frac{C_{ij}}{C_i} \log \frac{1}{Q_{ij}}.$$
Of course, this is nothing but the maximum likelihood estimator of P under the low-rank constraint.
Just like the empirical conditional probability matrix, it suffers from lack of smoothing. For instance,
if a whole column of C is zero, then so will be the corresponding column of Q^{LR}(C). The first
naive attempt at smoothing would be to add 1/2 to C and then apply the algorithm:
$$Q^{\text{Naive Add-1/2-LR}}(C) = Q^{\mathrm{LR}}(C + \tfrac{1}{2}).$$
However, this would result in excessive smoothing, especially when m is small. The intuitive reason
is this: in the extreme case of m = 1 all rows need to be combined, and thus instead of adding 1/2 to
each category, Q^{Naive Add-1/2-LR} would add k/2, leading to the uniform distribution overwhelming
the original distribution. We may be tempted to mitigate this by adding instead 1/(2k), but this doesn't
generalize well to other smoothing methods. A more principled approach should perform smoothing
directly inside the factorization, and this is exactly what we propose here. Our main algorithm is:
Algorithm: ADD-1/2-SMOOTHED LOW-RANK
- Input: k × k matrix (C_ij); initial W^0 and H^0; number of iterations T
- Iterations: start at t = 0, increment and repeat while t < T:
  - For all i ∈ [k], ℓ ∈ [m], update W^t_{iℓ} ← W^{t−1}_{iℓ} Σ_j [C_ij / (WH)^{t−1}_{ij}] H^{t−1}_{ℓj}
  - For all ℓ ∈ [m], j ∈ [k], update H^t_{ℓj} ← H^{t−1}_{ℓj} Σ_i [C_ij / (WH)^{t−1}_{ij}] W^{t−1}_{iℓ}
  - Add 1/2 to each element of W^t and H^t, then normalize each row.
- Output: Q^{Add-1/2-LR}(C) = W^T H^T
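A compact NumPy transcription of the pseudocode above (ours, not the authors' implementation; the small clip guarding against division by zero is an added safeguard):

```python
import numpy as np

def add_half_smoothed_low_rank(C, m, T, seed=0):
    """ADD-1/2-SMOOTHED LOW-RANK: multiplicative NMF updates on
    row-stochastic W (k x m) and H (m x k), followed by add-1/2 smoothing
    and row normalization at every iteration. Returns Q = W H."""
    rng = np.random.default_rng(seed)
    k = C.shape[0]
    W = rng.random((k, m)); W /= W.sum(axis=1, keepdims=True)
    H = rng.random((m, k)); H /= H.sum(axis=1, keepdims=True)
    for _ in range(T):
        R = C / np.clip(W @ H, 1e-12, None)   # C_ij / (WH)_ij at step t-1
        W_new = W * (R @ H.T)                 # W_il <- W_il sum_j R_ij H_lj
        H_new = H * (W.T @ R)                 # H_lj <- H_lj sum_i R_ij W_il
        W = W_new + 0.5
        W /= W.sum(axis=1, keepdims=True)
        H = H_new + 0.5
        H /= H.sum(axis=1, keepdims=True)
    return W @ H
```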
The intuition here is that, prior to normalization, the updated W and H can be interpreted as soft
counts. One way to see this is to sum each row i of (pre-normalized) W , which would give Ci . As for
H, the sums of its (pre-normalized) columns reproduce the sums of the columns of C. Next, we are
naturally led to ask: is Q^{Add-1/2-LR}(C) implicitly minimizing a risk, just as Q^{LR}(C) minimizes R_n(Q)?
Theorem 2 shows that indeed Q^{Add-1/2-LR}(C) essentially minimizes a penalized empirical risk.
More interestingly, ADD-1/2-SMOOTHED LOW-RANK lends itself to a host of generalizations. In
particular, an important smoothing technique, absolute discounting, is very well suited for heavy-tailed data such as natural language [11, 21, 5]. We can generalize it to fractional counts as follows.
Let C_i indicate counts in traditional (vector) probability estimation, and let D be the total number of
distinct observed categories, i.e. D = Σ_i I{C_i ≥ 1}. Let the number of fractional distinct categories
d be defined as d = Σ_i C_i I{C_i < 1}. We have the following soft absolute discounting smoothing:
$$\hat{P}^{\text{Soft-AD}}_i(C, \beta) = \begin{cases} \dfrac{C_i - \beta}{\sum_j C_j} & \text{if } C_i \ge 1, \\[6pt] (1 - \beta)\,\dfrac{C_i}{\sum_j C_j} + \dfrac{\beta(D + d)}{k - D - d} \cdot \dfrac{1 - C_i}{\sum_j C_j} & \text{if } C_i < 1. \end{cases}$$
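A sketch of this smoother for a fractional count vector, under our reading of the display above (which is garbled in our copy); it assumes at least one category has count below 1, and it conserves probability mass by construction:

```python
import numpy as np

def soft_absolute_discounting(c, beta):
    """Soft absolute discounting of a (possibly fractional) count vector c
    with discount beta in (0, 1): categories with c_i >= 1 each lose beta;
    the others keep (1 - beta) of their own mass and share beta*(D + d)
    in proportion to (1 - c_i)."""
    c = np.asarray(c, dtype=float)
    total = c.sum()
    high = c >= 1.0
    D = float(high.sum())        # distinct observed categories
    d = float(c[~high].sum())    # fractional distinct categories
    k = c.size
    p = np.empty_like(c)
    p[high] = (c[high] - beta) / total
    p[~high] = ((1.0 - beta) * c[~high]
                + beta * (D + d) * (1.0 - c[~high]) / (k - D - d)) / total
    return p
```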
This gives us the following patched algorithm, which we do not place under the lens of theory
currently, but we strongly support it with our experimental results of Section 7.
Algorithm: ABSOLUTE-DISCOUNTING-SMOOTHED LOW-RANK
- Input: specify β ∈ (0, 1)
- Iteration (the updates of ADD-1/2-SMOOTHED LOW-RANK, with the smoothing step replaced by):
  - Add 1/2 to each element of W^t, then normalize.
  - Apply soft absolute discounting to H^t: H^t_{ℓj} ← P̂^{Soft-AD}_j(H^t_{ℓ,·}, β)
- Output: Q^{AD-LR}(C, β) = W^T H^T
6 Analysis
We now outline the proof of the sample complexity upper bound of Theorem 3. Thus for the remainder
of this section we have:
$$Q_n(C) = \arg\min_{Q = WH} R_n(Q) + \frac{1}{2n} \sum_{i,\ell} \log \frac{1}{W_{i\ell}} + \frac{1}{2n} \sum_{\ell,j} \log \frac{1}{H_{\ell j}},$$
that is, Q_n ∈ P_m achieves the global minimum of Equation (3). Since we have a penalized empirical
risk minimization at hand, we can study it within the classical PAC-learning framework. However,
rates of order 1/n are often associated with the realizable case, where R_n(Q_n) is exactly zero [27].
The following Lemma shows that we are near the realizable regime.
Lemma 5 (Near-realizability). We have
$$\mathbb{E}[R_n(Q_n)] \le \frac{k}{n} + \frac{km}{n} \log(2n + k).$$
We characterize the complexity of the class P_m by quantizing probabilities, as follows. Given a
positive integer L, define Δ_L to be the subset of the appropriate simplex Δ consisting of L-empirical
distributions (or "types" in information theory): Δ_L consists exactly of those distributions p that can
be written as p_i = L_i/L, where the L_i are non-negative integers that sum to L.
Definition 6 (Quantization). Given a positive integer L, define the L-quantization operation as
mapping a probability vector p to the closest (in ℓ1-distance) element of Δ_L, p̃ = arg min_{q∈Δ_L} ‖p − q‖_1.
For a matrix P ∈ P_m, define an L-quantization for any given factorization choice P = AB as
P̃ = ÃB̃, where each row of à and B̃ is the L-quantization of the respective row of A and B. Lastly,
define P_{m,L} to be the set of all quantized probability matrices derived from P_m.
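One simple way to realize the L-quantization in code is largest-remainder rounding, which rounds each mass down and hands the remaining units to the largest fractional parts; a sketch (ours):

```python
import numpy as np

def l_quantize(p, L):
    """Map a probability vector p to an element of Delta_L, i.e. a vector
    (L_1/L, ..., L_k/L) with non-negative integers L_i summing to L."""
    scaled = np.asarray(p, dtype=float) * L
    floors = np.floor(scaled).astype(int)
    remainder = int(L - floors.sum())           # units still to hand out
    order = np.argsort(scaled - floors)[::-1]   # largest remainders first
    floors[order[:remainder]] += 1
    return floors / L
```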
Via counting arguments, the cardinality of P_{m,L} is bounded by |P_{m,L}| ≤ (L + 1)^{2km}. This quantized
family gives us the following approximation ability.
Lemma 7 (De-quantization). For a probability vector p, L-quantization satisfies |p_i − p̃_i| ≤ 1/L for
all i, and ‖p − p̃‖_1 ≤ 2/L. For a conditional probability matrix Q ∈ P_m, any quantization Q̃ satisfies
|Q_ij − Q̃_ij| ≤ 3/L for all i, j. Furthermore, if Q > ε per entry and L > 6/ε, then:
$$|R(Q) - R(\tilde{Q})| \le \frac{6}{\epsilon L} \quad \text{and} \quad |R_n(Q) - R_n(\tilde{Q})| \le \frac{6}{\epsilon L}.$$
We now give a PAC-style relative uniform convergence bound on the empirical risk [27].
Theorem 8 (Relative uniform convergence). Assume lower-bounded P > η and choose any τ > 0.
We then have the following uniform bound over all lower-bounded Q̃ > ε in P_{m,L} (Definition 6):
$$\Pr\left\{ \sup_{\tilde{Q} \in \mathcal{P}_{m,L},\, \tilde{Q} > \epsilon} \frac{R(\tilde{Q}) - R_n(\tilde{Q})}{\sqrt{R(\tilde{Q})}} > \tau \right\} \le e^{-\frac{n\tau^2}{20 \log^2 \frac{1}{\epsilon} + 2\tau \sqrt{10} \log \frac{1}{\epsilon}} + 2km \log(L+1)}. \qquad (4)$$
The proof of this Theorem consists, for fixed Q̃, of showing a sub-Poisson concentration of the sum
of the log likelihood ratios. This needs care, as a simple Bennett or Bernstein inequality is not enough,
because we need to eventually self-normalize. A critical component is to relate the variance and scale
of the concentration to the KL-risk and its square root, respectively. The theorem then follows from
uniformly bounding the normalized variance and scale over P_{m,L} and a union bound.
To put the pieces together, first note that thanks to the fact that the optimum is also a stable point of
ADD-1/2-SMOOTHED LOW-RANK, the add-1/2 nature of the updates implies that all of the elements
of Q_n are lower-bounded by 1/(2n + k). By Lemma 7 and a proper choice of L of the order of (2n + k)²,
the quantized version won't be much smaller. We can thus choose ε = 1/(2n + k) in Theorem 8 and use
our assumption of η = (km/n) log(2n + k). Using Lemmas 5 and 7 to bound the contribution of the
empirical risk, we can then integrate the probability bound of (4) similarly to the realizable case.
This gives a bound on the expected risk of the quantized version of Q_n of order (km/n) log(1/ε) log L, or
effectively (km/n) log²(2n + k). We then complete the proof by de-quantizing using Lemma 7.
7 Experiments
Having expounded the theoretical merit of properly smoothing structured conditional probability
matrices, we give a brief empirical study of its practical impact. We use both synthetic and real data.
The various methods compared are as follows:
- Add-1/2, directly on the bigram counts: P̂^{Add-1/2}_{n,ij} = (C_ij + 1/2)/(C_i + k/2)
- Absolute-discounting, directly on the bigram counts: P̂^{AD}_n(C, β) (see Section 5)
- Naive Add-1/2 Low-Rank, smoothing the counts then factorizing: Q^{Naive Add-1/2-LR} = Q^{LR}(C + 1/2)
- Naive Absolute-Discounting Low-Rank: Q^{Naive AD-LR} = Q^{LR}(n P̂^{AD}_n(C, β))
- Stupid backoff (SB) of Google, a very simple algorithm proposed in [9]
- Kneser-Ney (KN), a widely successful algorithm proposed in [17]
- Add-1/2-Smoothed Low-Rank, our proposed algorithm with provable guarantees: Q^{Add-1/2-LR}
- Absolute-Discounting-Smoothed Low-Rank, heuristic generalization of our algorithm: Q^{AD-LR}
The synthetic model is determined randomly. π is uniformly sampled from the k-simplex. The matrix
P = AB is generated as follows. The rows of A are uniformly sampled from the k-simplex. The
rows of B are generated in one of two ways: either sampled uniformly from the simplex or randomly
permuted power law distributions, to imitate natural language. The discount parameter is then fixed
to 0.75. Figure 1(a) uses uniformly sampled rows of B, and shows that, despite attempting to harness
the low-rank structure of P, not only does Naive Add-1/2 fall short, but it may even perform worse
than Add-1/2, which is oblivious to structure. Add-1/2-Smoothed Low-Rank, on the other hand, reaps
the benefits of both smoothing and structure.
Figure 1(b) expands this setting to compare against other methods. Both of the proposed algorithms
have an edge on all other methods. Note that Kneser-Ney is not expected to perform well in this
regime (rows of B uniformly sampled), because uniformly sampled rows of B do not behave
like natural language. On the other hand, for power law rows, even if k ≫ n, Kneser-Ney does
well, and it is only superseded by Absolute-Discounting-Smoothed Low-Rank. The consistent
good performance of Absolute-Discounting-Smoothed Low-Rank may be explained by the fact that
absolute discounting seems to enjoy some of the competitive-optimality of Good-Turing estimation,
as recently demonstrated by [22]. This is why we chose to illustrate the flexibility of our framework
by heuristically using absolute discounting as the smoothing component.
Before moving on to experiments on real data, we give a short description of the data sets. All but the
first one are readily available through the Python NLTK:
- tartuffe, a French text, train and test size: 9.3k words, vocabulary size: 2.8k words.
- genesis, English version, train and test size: 19k words, vocabulary size: 4.4k words.
- brown, shortened Brown corpus, train and test size: 20k words, vocabulary size: 10.5k words.
For natural language, using absolute discounting is imperative, and we restrict ourselves to Absolute-Discounting-Smoothed Low-Rank. The results of the performance of various algorithms are listed
in Table 1. For all these experiments, m = 50 and 200 iterations were performed. Note that the
proposed method has lower cross-entropy per word across the board.
Figure 1: Performance of selected algorithms over synthetic data. (a) k = 100, m = 5; (b) k = 50, m = 3; (c) k = 1000, m = 10. [Plots of KL loss versus number of samples n for Add-1/2, Absolute-discounting, Naive Add-1/2 Low-Rank, Naive Abs-Disc Low-Rank, Add-1/2-Smoothed Low-Rank, Abs-Disc-Smoothed Low-Rank, and Kneser-Ney.]
Figure 2: Experiments on real data. (a) Performance on tartuffe and (b) on genesis: difference in test cross-entropy from the Stupid Backoff baseline (nats/word) versus training size (number of words), for Smoothed Low Rank and Kneser-Ney. (c) Rank selection for tartuffe: validation-set cross-entropy (nats/word) versus m.
Dataset  | Add-1/2 | AD     | SB     | KN     | AD-LR
tartuffe | 7.1808  | 6.268  | 6.0426 | 5.7555 | 5.6923
genesis  | 7.3039  | 6.041  | 5.9058 | 5.7341 | 5.6673
brown    | 8.847   | 7.9819 | 7.973  | 7.7001 | 7.609
Table 1: Cross-entropy results for different methods on several small corpora
We also illustrate the performance of different algorithms as the training size increases. Figure 2
shows the relative performance of selected algorithms with Stupid Backoff chosen as the baseline. As
Figure 2(a) suggests, the amount of improvement in cross-entropy at n = 15k is around 0.1 nats/word.
This improvement is comparable to, and even more significant than, that reported in the celebrated work of
Chen and Goodman [11] for Kneser-Ney over the best algorithms at the time.
Even though our algorithm is given the rank m as a parameter, the internal dimension is not revealed,
if ever known. Therefore, we could choose the best m using model selection. Figure 2(c) shows one
way of doing this, by using a simple cross-validation for the tartuffe data set. In particular, half of the
data was held out as a validation set, and for a range of different choices for m, the model was trained
and its cross-entropy on the validation set was calculated. The figure shows that there exists a good
choice of m k. A similar behavior is observed for all data sets. Most interestingly, the ratio of the
best m to the vocabulary size corpus is reminiscent of the choice of internal dimension in [20].
8
Conclusion
Despite the theoretical impetus of the paper, the resulting algorithms considerably improve over
several benchmarks. There is more work ahead, however. Many possible theoretical refinements
are in order, such as eliminating the logarithmic term in the sample complexity and dependence
on P (strong optimality). This framework naturally extends to tensors, such as for higher-order
N -gram language models. It is also worth bringing back the Markov assumption and understanding
how various mixing conditions influence the sample complexity. A more challenging extension,
and one we suspect may be necessary to truly be competitive with RNNs/LSTMs, is to parallel this
contribution in the context of generative models with long memory. The reason we hope to not only
be competitive with, but in fact surpass, these models is that they do not use distributional properties
of language, such as its quintessentially power-law nature. We expect smoothing methods such as
absolute-discounting, which do account for this, to lead to considerable improvement.
Acknowledgments We would like to thank Venkatadheeraj Pichapati and Ananda Theertha Suresh
for many helpful discussions. This work was supported in part by NSF grants 1065622 and 1564355.
References
[1] Abe, Warmuth, and Takeuchi. Polynomial learnability of probabilistic concepts with respect to the
Kullback-Leibler divergence. In COLT, 1991.
[2] Acharya, Jafarpour, Orlitsky, and Suresh. Optimal probability estimation with applications to prediction
and classification. In COLT, 2013.
[3] Agarwal, Anandkumar, Jain, and Netrapalli. Learning sparsely used overcomplete dictionaries via
alternating minimization. arXiv preprint arXiv:1310.7991, 2013.
[4] Arora, Ge, Ma, and Moitra. Simple, efficient, and neural algorithms for sparse coding. arXiv preprint
arXiv:1503.00778, 2015.
[5] Ben Hamou, Boucheron, and Ohannessian. Concentration Inequalities in the Infinite Urn Scheme for
Occupancy Counts and the Missing Mass, with Applications. Bernoulli, 2017.
[6] Bhojanapalli, Kyrillidis, and Sanghavi. Dropping convexity for faster semi-definite optimization. arXiv
preprint arXiv:1509.03917, 2015.
[7] Blei, Ng, and Jordan. Latent Dirichlet allocation. JMLR, 2003.
[8] Borade and Zheng. Euclidean information theory. IEEE Int. Zurich Seminar on Comm., 2008.
[9] Brants, Popat, Xu, Och, and Dean. Large language models in machine translation. In EMNLP, 2007.
[10] Buntine and Jakulin. Applying discrete PCA in data analysis. In Proceedings of the 20th conference on
Uncertainty in artificial intelligence, pages 59–66. AUAI Press, 2004.
[11] Chen and Goodman. An empirical study of smoothing techniques for language modeling. Computer
Speech & Language, 13(4):359–393, 1999.
[12] Hofmann. Probabilistic latent semantic indexing. In ACM SIGIR, 1999.
[13] Huang, Kakade, Kong, and Valiant. Recovering structured probability matrices. arXiv preprint arXiv:1602.06586, 2016.
[14] Hutchinson, Ostendorf, and Fazel. Low rank language models for small training sets. IEEE SPL, 2011.
[15] Hutchinson, Ostendorf, and Fazel. A Sparse Plus Low-Rank Exponential Language Model for Limited
Resource Scenarios. IEEE Trans. on Audio, Speech, and Language Processing, 2015.
[16] Kamath, Orlitsky, Pichapati, and Suresh. On learning distributions from their samples. In COLT, 2015.
[17] Kneser and Ney. Improved backing-off for m-gram language modeling. In ICASSP, 1995.
[18] Lee and Seung. Algorithms for non-negative matrix factorization. In NIPS, 2001.
[19] Levy and Goldberg. Neural word embedding as implicit matrix factorization. In NIPS, 2014.
[20] Mikolov, Kombrink, Burget, Černocký, and Khudanpur. Extensions of recurrent neural network language model. In ICASSP, 2011.
[21] Ohannessian and Dahleh. Rare probability estimation under regularly varying heavy tails. In COLT, 2012.
[22] Orlitsky and Suresh. Competitive distribution estimation: Why is Good-Turing good. In NIPS, 2015.
[23] Papadimitriou, Tamaki, Raghavan, and Vempala. Latent semantic indexing: A probabilistic analysis. In
ACM SIGACT-SIGMOD-SIGART, 1998.
[24] Parikh, Saluja, Dyer, and Xing. Language Modeling with Power Low Rank Ensembles. arXiv preprint
arXiv:1312.7077, 2013.
[25] Shazeer, Pelemans, and Chelba. Skip-gram Language Modeling Using Sparse Non-negative Matrix
Probability Estimation. arXiv preprint arXiv:1412.1454, 2014.
[26] Valiant and Valiant. Instance optimal learning. arXiv preprint arXiv:1504.05321, 2015.
[27] Vapnik. Statistical Learning Theory. Wiley-Interscience, 1998.
[28] Williams, Prasad, Mrva, Ash, and Robinson. Scaling Recurrent Neural Network Language Models. arXiv
preprint arXiv:1502.00512, 2015.
[29] Zhu, Yang, and Oja. Multiplicative updates for learning with stochastic matrices. In Image Analysis, 2013.
5,746 | 62 |
CYCLES: A Simulation Tool for Studying
Cyclic Neural Networks
Michael T. Gately
Texas Instruments Incorporated, Dallas, TX 75265
ABSTRACT
A computer program has been designed and implemented to allow a researcher
to analyze the oscillatory behavior of simulated neural networks with cyclic connectivity. The computer program, implemented on the Texas Instruments Explorer / Odyssey system, and the results of numerous experiments are discussed.
The program, CYCLES, allows a user to construct, operate, and inspect neural
networks containing cyclic connection paths with the aid of a powerful graphics-based interface. Numerous cycles have been studied, including cycles with one or
more activation points, non-interruptible cycles, cycles with variable path lengths,
and interacting cycles. The final class, interacting cycles, is important due to its
ability to implement time-dependent goal processing in neural networks.
INTRODUCTION
Neural networks are capable of many types of computation. However, the
majority of researchers are currently limiting their studies to various forms of
mapping systems, such as content addressable memories, expert system engines,
and artificial retinas. Typically, these systems have one layer of fully connected
neurons or several layers of neurons with limited (forward direction only) connectivity. I have defined a new neural network topology: a two-dimensional lattice of
neurons connected in such a way that circular paths are possible.
The neural networks defined can be viewed as a grid of neurons with one
edge containing input neurons and the opposite edge containing output neurons
[Figure 1]. Within the grid, any neuron can be connected to any other. Thus
from one point of view, this is a multi-layered system with full connectivity. I
view the weights of the connections as being the long term memory (LTM) of the
system and the propagation of information through the grid as being it's short
term memory (STM).
The topology of connectivity between neurons can take on any number of
patterns. Using the mammalian brain as a guide, I have limit~d the amount of
connectivity to something much less then total. In addition to making analysis
of such systems less complex, limiting the connectivity to some small percentage
of the total number of neurons reduces the amount of memory used in computer
simulations. In general, the connectivity can be purely random, or can form any
of a number of patterns that are repeated across the grid of neurons.
The program CYCLES allows the user to quickly describe the shape of the
neural network grid, the source of input data, the destination of the output data,
and the pattern of connectivity. Once constructed, the network can be "run," during
which time the STM may be viewed graphically.
© American Institute of Physics 1988
Figure 1. COMPONENTS OF A CYCLES NEURAL NETWORK [the neuron grid with its input column and output column, and a sample connectivity pattern replicated across all neurons]
Figure 2. NEURAL NETWORK WORKSTATION INTERFACE [command window, graphical display window, user interaction window, and status window]
IMPLEMENTATION
CYCLES was implemented on a TI Explorer/Odyssey computer system with
8MB of RAM and 128MB of Virtual Memory. The program was written in Common LISP. The program was started in July of 1986, put aside for a while, and
finished in March of 1987. Since that time, numerous small enhancements have
been made - and the system has been used to test various theories of cyclic neural
networks.
The code was integrated into the Neural Network Workstation (NNW), an
interface to various neural network algorithms. The NNW utilizes the window
interface of the Explorer LISP machine to present a consistent command input
and graphical output to a variety of neural network algorithms [Figure 2].
The backpropagation-like neurons are collected together into a large three-dimensional array. The implementation actually allows the use of multiple two-dimensional grids; to date, however, I have studied only single-grid systems.
Each neuron in a CYCLES simulation consists of a list of information: the
value of the neuron, the time that the neuron last fired, a temporary value used
during the computation of the new value, and a list of the neuron's connectivity.
The connectivity list stores the location of a related neuron and the strength of
the connection between the two neurons. Because the system is implemented in
arrays and lists, large systems tend to be very slow. However, most of my analysis
has taken place on very small systems (< 80 neurons) and for this size the speed
is acceptable.
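In modern terms, the per-neuron record just described might look like the following Python sketch (the original program was Common LISP; the field names here are ours):

```python
from dataclasses import dataclass, field

@dataclass
class Neuron:
    value: float = 0.0       # current activation of the neuron
    last_fired: int = -1     # time step at which the neuron last fired
    temp: float = 0.0        # scratch value used while computing the update
    # connectivity: (row, col, weight) triples of neurons feeding this one
    connections: list = field(default_factory=list)
```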
To help gauge the speed of CYCLES, a single grid system containing 100
neurons takes 0.8 seconds and 1235 cons cells (memory cells) to complete one
update within the LISP machine. If the graphics interface is disabled, a test
requiring 100 updates takes a total of 10.56 seconds.
TYPES OF CYCLES
As mentioned above, several types of cycles have been observed. Each of these
can be used for different applications. Figure 3 shows some of these cycles.
1. SIMPLE cycles are those that have one or more points of activation traveling
across a set number of neurons in a particular order. The path length can be
any size.
2. NON-INTERRUPTIBLE cycles are those that have sufficiently strong connectivity strengths that random flows of activation which interact with the
cycle will not upset or vary the original cycle.
3. VARIABLE PATH LENGTH cycles can, based upon external information,
change their path length. There must be one or more neurons that are always
a part of the path.
4. INTERACTING cycles typically have one neuron in common. Each cycle
must have at least one other neuron involved at the junction point in order to
keep the cycles separate. This type of cycle has been shown to implement a
complex form of a clock where the product of the two (or more) path lengths
is the fundamental frequency.
Figure 3. Types of Cycles [Simple and Interacting]
Figure 4. Types of Connectivity [Nearest Neighbor and Gaussian]
Figure 5. Robot Arm used in Example [inputs: Intent, Joint 3 Extended, Joint 2 Centered, Joint 1 Extended, Chuck Opened, Chuck Closed; outputs: Completed, Move Joint 3, Move Joint 2, Move Joint 1, Open Chuck, Close Chuck]
CONNECTIVITY
Several types of connectivity have been investigated. These are shown in
Figure 4.
1. In TOTAL connectivity, every neuron is connected to every other neuron.
This particular pattern produces very complex interactions with no apparent
stability.
2. With RANDOM connectivity, each neuron is connected to a random number
of other neurons. These other neurons can be anywhere in the grid.
3. A very useful type of connectivity is to have a PATTERN. The patterns can
be of any shape, typically having one neuron feed its nearest neighbors.
4. Finally, the GAUSSIAN pattern has been used with the most success. In this
pattern, each neuron is connected to a set number of nodes - but the selection
is random. Further, the distribution of nodes is in a Gaussian shape, centered
around a point "forward" of itself. Thus the flow of information, in general,
moves forward, but the connectivity allows cycles to be formed.
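A sketch of how the Gaussian pattern might be generated for one neuron, with targets drawn around a point a little forward of the source; the paper gives no exact parameters, so fan_out, forward, and spread below are illustrative:

```python
import numpy as np

def gaussian_targets(row, col, grid_shape, fan_out=5, forward=2.0,
                     spread=1.5, rng=None):
    """Draw fan_out connection targets for the neuron at (row, col),
    Gaussian-distributed around a point `forward` columns ahead, so
    information flows mostly toward the output column while backward
    links (and hence cycles) remain possible."""
    rng = rng or np.random.default_rng()
    rows, cols = grid_shape
    targets = []
    while len(targets) < fan_out:
        r = int(round(rng.normal(row, spread)))
        c = int(round(rng.normal(col + forward, spread)))
        if 0 <= r < rows and 0 <= c < cols and (r, c) != (row, col):
            targets.append((r, c))
    return targets
```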
ALGORITHM
The algorithm currently being used in the system is a standard inner product
equation with a sigmoidal threshold function. Each time a neuron's value is to
be calculated, the value of each contributing neuron on the connectivity list is
multiplied by the strength of the connection and summed. This sum is passed
through a sigmoidal thresholding function. The value of the neuron is changed
to be the result of this threshold function. As you can see, the system updates
neurons in an ordered fashion, thus certain interactions will not be observed. Since
timing information is saved in the neurons, asynchrony could be simulated.
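A sketch of one update sweep consistent with the description above, using the Neuron record from the Implementation section; the firing criterion is illustrative, since the paper does not spell one out:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sweep(grid, t, fire_level=0.5):
    """One ordered update pass: each neuron's new value is the sigmoid of
    the weighted sum of its inputs, applied in place, so later neurons in
    the sweep already see earlier neurons' updated values."""
    for row in grid:
        for nrn in row:
            nrn.temp = sum(grid[r][c].value * w
                           for r, c, w in nrn.connections)
            nrn.value = sigmoid(nrn.temp)
            if nrn.value > fire_level:
                nrn.last_fired = t
```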
Initially, the weights of the connections are set randomly. A number of interesting cycles have been observed as a result of this randomness. However, several
experiments have required specific weights. To accommodate this, an interface to
the weight matrix is used. The user can create any set of connection strengths
desired.
I have experimented with several learning algorithms, that is, algorithms that
change the connection weights. The first mechanism was a simple Hebbian rule
which states that if two neurons both fire, and there is a connection between them,
then the strength of that connection is increased. A second algorithm I experimented with used a pain/pleasure indicator to strengthen or weaken weights.
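The Hebbian rule can be sketched against the same structures; the learning rate, weight cap, and firing threshold are illustrative, not values from the paper:

```python
def hebbian_step(grid, lr=0.05, w_max=1.0, fire_level=0.5):
    """If two connected neurons both fire, strengthen their connection."""
    for row in grid:
        for nrn in row:
            if nrn.value <= fire_level:
                continue  # this neuron did not fire; leave weights alone
            updated = []
            for r, c, w in nrn.connections:
                if grid[r][c].value > fire_level:
                    w = min(w_max, w + lr)  # both ends active: strengthen
                updated.append((r, c, w))
            nrn.connections = updated
```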
An algorithm that is currently under development actually presets the weights
from a grammar of activity required of the network. Thus, the user can describe
a process that must be controlled by a network using a simple grammar. This
description is then "compiled" into a set of weights that contain cycles to indicate
time-independent components of the activity.
USAGE
Even without a biological background, it is easy to see that the processing
power of the human brain is far more than present associative memories. Our
repertoire of capabilities includes, among other things: memory of a time line,
creativity, numerous types of biological clocks, and the ability to create and execute complex plans. The CYCLES algorithm has been shown to be capable of
executing complex, time-variable plans.
A plan can be defined as a sequence of actions that must be performed in
some preset order. Under this definition, the execution of a plan would be very
straightforward. However, when individual actions within the plan take an indeterminate length of time, it is necessary to construct an execution engine capable
of dealing with unexpected time delays. Such a system must also be able to abort
the processing of a plan based on new data.
With careful programming of connection weights, I have been able to use
CYCLES to execute time-variable plans. The particular example I have chosen
is for a robot arm to change its tool. In this activity, once the controller receives
the signal that the motion required, a series of actions take place that result in
the tool being changed.
As input to this system I have used a number of sensors that may be found
in a robot; extension sensors in 2-D joints and pressure sensors in articulators.
The outputs of this network are pulses that I have defined to activate motors on
the robot arm. Figure 5 shows how this system could be implemented. Figure
6 indicates the steps required to perform the task. Simple time delays, such as
found with binding motors and misplaced objects are accommodated with the
built-in time-independence.
The small cycles that occur within the neural network can be thought of as
short term memory. The cycle acts as a place holder, keeping track of the system's
current place in a series of tasks. This type of pausing is necessary in many "real"
activities such as simple process control or the analysis of time varying data.
IMPLICATIONS
The success of CYCLES to simple process control activities such as robot arm
control implies that there is a whole new area of applications for neural networks
beyond present associative memories. The exploitation of the flow of activation as
a form of short term memory provides us with a technique for dealing with many
of the "other" type of computations which humans perform.
The future of the CYCLES algorithm will take two directions. First, the
completion of a grammar and compiler for encoding process control tasks into a
network. Second, other learning algorithms will be investigated which are capable
of adding and removing connections and altering the strengths of connections
based upon an abstract pain/pleasure indicator.
Figure 6. Example use of CYCLES to control a Robot Arm. [Six panels trace the activation flow: the robot gets a signal to begin the tool change process, and a cycle is started that outputs a signal to the chuck motor; when a sensor indicates that the chuck is open, the first cycle is stopped and a second begins, activating the motor in the first joint; when the first joint is fully extended, the joint sensor sends a signal that stops that cycle and begins one that outputs a signal to the second joint; when the joint indicator indicates that the joint is centered, it changes the flow of activation to cause a cycle that activates the third joint; next, the chuck is closed around the new tool bit; the last signal ends the sequence of cycles and sends the completed signal.]
5,747 | 620 | Connected Letter Recognition with a
Multi-State Time Delay Neural Network
Hermann Hild and Alex Waibel
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213-3891, USA
Abstract
The Multi-State Time Delay Neural Network (MS-TDNN) integrates a nonlinear time alignment procedure (DTW) and the high-accuracy phoneme spotting capabilities of a TDNN into a connectionist speech recognition system with word-level classification and
error backpropagation. We present an MS-TDNN for recognizing
continuously spelled letters, a task characterized by a small but
highly confusable vocabulary. Our MS-TDNN achieves 98.5/92.0%
word accuracy on speaker dependent/independent tasks, outperforming previously reported results on the same databases. We propose training techniques aimed at improving sentence level performance, including free alignment across word boundaries, word duration modeling and error backpropagation on the sentence rather
than the word level. Architectures integrating submodules specialized on a subset of speakers achieved further improvements.
1 INTRODUCTION
The recognition of spelled strings of letters is essential for all applications involving
proper names, addresses or other large sets of special words which due to their sheer
size cannot be in the basic vocabulary of a recognizer. The high confusability of the
English letters makes the seemingly easy task a very challenging one, currently only
addressed by a few systems, e.g. those of R. Cole et al. [JFC90, FC90, CFGJ91]
for isolated spoken letter recognition. Their connectionist systems first find a broad
phonetic segmentation, from which a letter segmentation is derived, which is then
classified by another network.

[Figure 1: The MS-TDNN recognizing the excerpted word 'B'. Layers, from bottom to top: Input Layer (16 melscale FFT coefficients), Hidden Layer (15 hidden units), Phoneme Layer (59 phoneme units, only 9 shown), DTW Layer (27 word templates, only 4 shown), and Word Layer (27 word units, only 4 shown). Only the activations for the words 'SIL', 'A', 'B', and 'C' are shown.]

In this paper, we present the MS-TDNN as a
connectionist speech recognition system for connected letter recognition. After describing the baseline architecture, training techniques aimed at improving sentence
level performance and architectures with gender-specific subnets are introduced.
Baseline Architecture. Time Delay Neural Networks (TDNNs) can combine the
robustness and discriminative power of Neural Nets with a time-shift invariant architecture to form high-accuracy phoneme classifiers [WHH+89]. The Multi-State
TDNN (MS-TDNN) [HFW91, Haf92, HW92], an extension of the TDNN, is capable
of classifying words (represen ted as sequences of phonemes) by integrating a nonlinear time alignment procedure (DTW) into the TDNN architecture. Figure 1 shows
an MS-TDNN in the process of recognizing the excerpted word 'B', represented by
16 melscale FFT coefficients at a 10-msec frame rate. The first three layers constitute a standard TDNN, which uses sliding windows with time delayed connections
to compute a score for each phoneme (state) for every frame, these are the activations in the "Phoneme Layer". In the "DTW Layer" , each word to be recognized
is modeled by a sequence of phonemes. The corresponding activations are simply
713
714
Hild and Waibel
copied from the Phoneme Layer into the word models of the DTW Layer, where an
optimal alignment path is found for each word. The activations along these paths
are then collected in the word output units. All units in the DTW and Word Layer
are linear and have no biases. 15 (25 to 100) hidden units per frame were used
for speaker-dependent (-independent) experiments, the entire 26 letter network has
approximately 5200 (8600 to 34500) parameters.
Training starts with "bootstrapping", during which only the front-end TDNN is
used with fixed phoneme boundaries as targets. In a second phase, training is performed with word level targets. Phoneme boundaries are freely aligned within given
word boundaries in the DTW layer. The error derivatives are backpropagated from
the word units through the alignment path and the front-end TDNN.
The choice of sensible objective functions is of great importance. Let Y =
(y_1, ..., y_n) be the output and T = (t_1, ..., t_n) the target vector. For training on
the phoneme level (bootstrapping), there is a target vector T for each frame in
time, representing the correct phoneme j in a "1-out-of-n" coding, i.e. t_i = delta_{ij}. To
see why the standard Mean Square Error (MSE = Σ_{i=1}^{n} (y_i - t_i)^2) is problematic
for "1-out-of-n" codings for large n (n = 59 in our case), consider for example that
for a target (1.0, 0.0, ..., 0.0) the output vector (0.0, ..., 0.0) has only half the error
of the more desirable output (1.0, 0.2, ..., 0.2). To avoid this problem, we are
using

E(T, Y) = - Σ_{i=1}^{n} log(1 - (y_i - t_i)^2),

which (like cross entropy) punishes "outliers" with an error approaching infinity for
|t_i - y_i| approaching 1.0. For the word level training, we have achieved best results
with an objective function similar to the "Classification Figure of Merit (CFM)"
[HW90], which tries to maximize the distance d = y_c - y_hi between the correct score
y_c and the highest incorrect score y_hi instead of using absolute target values of 1.0
and 0.0 for correct and incorrect word units:

E_CFM(T, Y) = f(y_c - y_hi) = f(d) = (1 - d)^2

The philosophy here is not to "touch" any output unit not directly related to correct
classification. We found it even useful to apply error backpropagation only in the
case of a wrong or too narrow classification, i.e. if y_c - y_hi < safety_margin.
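The behaviour of these objectives can be compared numerically; the sketch below uses the phoneme-level error given above and an illustrative safety margin of 0.3 for the CFM-style word error (the margin value is an assumption).

```python
import numpy as np

def mse(t, y):
    return np.sum((y - t) ** 2)

def log_error(t, y, eps=1e-12):
    # E(T, Y) = -sum_i log(1 - (y_i - t_i)^2): grows without bound
    # as any |t_i - y_i| approaches 1.0
    return -np.sum(np.log(1.0 - (y - t) ** 2 + eps))

def cfm_word_error(scores, correct, safety_margin=0.3):
    # zero error unless the classification is wrong or too narrow
    d = scores[correct] - np.max(np.delete(scores, correct))
    return (1.0 - d) ** 2 if d < safety_margin else 0.0

t = np.eye(59)[7]                          # "1-out-of-59" phoneme target
print(mse(t, np.zeros(59)))                # 1.0: the all-zero output looks good to MSE
print(mse(t, np.full(59, 0.2) + 0.8 * t))  # 2.32: the preferable output scores worse
print(log_error(t, np.zeros(59)))          # ~27.6: the all-zero output is punished hard
print(cfm_word_error(np.array([0.9, 0.8, 0.1]), correct=0))  # 0.81: margin too narrow
```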
2 IMPROVING CONTINUOUS RECOGNITION
2.1 TRAINING ACROSS WORD BOUNDARIES
A proper treatment of word 1 boundaries is especially important for a short word
vocabulary, since most phones are at word boundaries. While the phoneme boundaries within a word are freely aligned by the DTW during "word level training" , the
word boundaries are fixed and might be error prone or suboptimal. By extending
the alignment one phoneme to the left (last phoneme of previous word) and the
right (first phoneme of next word), the word boundaries can be optimally adjusted
1 In our context, a "word" consists of one spelled letter, and a "sentence" is a continuously spelled string of letters.
Connected Letter Recognition with a Multi-State Time Delay Neural Network
[Figure 2: Various techniques to improve sentence level recognition performance: (a) alignment across word boundaries; (b) duration dependent word penalties, shown as a plot of prob(d) against duration d; (c) training on the sentence level.]
in the same way as the phoneme boundaries within a word. Figure 2(a) shows an
example in which the word to recognize is surrounded by a silence and a 'B', thus
the left and right context (for all words to be recognized) is the phoneme 'sil' and
'b', respectively. The gray shaded area indicates the extension necessary to the
DTW alignment. The diagram shows how a new boundary for the beginning of
the word 'A' is found. As indicated in figure 3, this techniques improves continuous
recognition significantly, but it doesn't help for excerpted words.
2.2 WORD DURATION DEPENDENT PENALIZING OF INSERTION AND DELETION ERRORS
In "continuous testing mode" , instead of looking at word units the well-known "One
Stage DTW" algorithm [Ney84) is used to find an optimal path through an unspecified sequence of words. The short and confusable English letters cause many word
insertion and deletion errors, such as "T E" vs. ''T'' or "0" vs. "0 0", therefore
proper duration modeling is essential.
As suggested in [HW92], minimum phoneme duration can be enforced by "state
duplication". In addition, we are modeling a duration and word dependent penalty
Penw(d)
log(k + probw(d?, where the pdf probw(d) is approximated from the
training data and k is a small constant to avoid zero probabilities. Pen w(d) is
added to the accumulated score AS of the search path, AS AS + Aw * Pen w(d),
whenever it crosses the boundary of a word w in Ney's "One Stage DTW" algorithm, as indicated in figure 2(b). The ratio Aw , which determines the degree of
influence of the duration penalty, is another important degree of freedom. There
is no straightforward mathematically exact way to compute the effect of a change
of the "weight" Aw to the insertion and deletion rate. Our approach is a (pseudo)
gradient descent, which changes Aw proportional to E(w) = (#ins w - #delw)/#w,
i.e. we are trying to maximize the relative balance of insertion and deletion errors.
=
=
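A sketch of the penalty estimation and of the pseudo-gradient update on λ_w follows; the histogram-based density estimate, the example durations, and the step size are illustrative assumptions.

```python
import numpy as np

def duration_penalty(durations, max_d=60, k=1e-3):
    """Estimate Pen_w(d) = -log(k + prob_w(d)) from observed word durations."""
    hist = np.bincount(np.asarray(durations), minlength=max_d + 1)
    prob = hist / hist.sum()            # pdf prob_w(d) from the training data
    return -np.log(k + prob)            # k avoids the log of zero probabilities

def update_weight(lam, n_ins, n_del, n_words, step=0.1):
    """Pseudo-gradient step on lambda_w, proportional to E(w): raise the
    weight when a word is inserted too often, lower it when the word is
    deleted too often."""
    return lam + step * (n_ins - n_del) / max(n_words, 1)

pen = duration_penalty([12, 14, 15, 15, 16, 18, 25])  # frames per occurrence of 'E'
lam = update_weight(1.0, n_ins=9, n_del=2, n_words=40)
print(pen[15], lam)
```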
2.3 ERROR BACKPROPAGATION AT THE SENTENCE LEVEL
Usually the MS-TDNN is trained to classify excerpted words, but evaluated on continuously spoken sentences. We propose a simple but effective method to extend
training on the sentence level. Figure 2(c) shows the alignment path of the sentence
"C A B", in which a typical error, the insertion of an 'A', occurred. In a forced
alignment mode (i.e. the correct sequence of words is enforced), positive training is
applied along the correct path, while the units along the incorrect path receive negative training. Note that the effect of positive and negative training is neutralized if
the paths are the same, only differing parts receive non-zero error backpropagation.
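The cancellation property can be made explicit with indicator masks over the two alignment paths, as in the sketch below; the state sequences form a made-up toy example.

```python
import numpy as np

def sentence_level_targets(forced_path, free_path, n_states):
    """Per-frame error direction for sentence-level training: +1 along the
    forced (correct) alignment path, -1 along the free (recognized) path.
    Frames on which both paths agree cancel exactly, so only differing
    parts receive non-zero error backpropagation."""
    T = len(forced_path)
    pos = np.zeros((T, n_states))
    neg = np.zeros((T, n_states))
    pos[np.arange(T), forced_path] = 1.0   # positive training on the correct path
    neg[np.arange(T), free_path] = 1.0     # negative training on the found path
    return pos - neg

forced = [0, 0, 1, 1, 2, 2]                # state sequence of "C A B"
free   = [0, 0, 1, 3, 3, 2]                # the recognizer inserted state 3 ('A')
print(sentence_level_targets(forced, free, n_states=4))  # frames 0-2, 5 are zero
```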
2.4 LEARNING CURVES
Figure 3 demonstrates the effect of the various training phases. The system is
bootstrapped (a) during iteration 1 to 130. Word level training starts (b) at iteration
110. Word level training with additional "training across word boundaries" (c) is
started at iteration 260. Excerpted word performance is not improved after (c), but
continuous recognition becomes significantly better, compare (d) and (e). In (d),
sentence level training is started directly after iteration 260, while in (e) sentence
level training is started after additional "across boundaries (word level) training".
[Figure 3: Learning curves (a = bootstrapping, b, c = word level (excerpted words), sentence level training (continuous speech)) on the training (o), crossvalidation (-) and test set (x) for the speaker-independent RM Spell-Mode data. Word accuracy (roughly 85-95%) is plotted over training iterations up to 420.]
3 GENDER SPECIFIC SUBNETS
A straightforward approach to building a more specialized system is simply to train
two entirely individual networks for male and female speakers only. During training,
the gender of a speaker is known; during testing it is determined by an additional
"gender identification network", which is simply another MS-TDNN with two output units representing male and female speakers. Given a sentence as input, this
network classifies the speaker's gender with approximately 99% accuracy. The overall modularized network improved the word accuracy from 90.8% (for the "pooled" net,
see table 1) to 91.3%. However, a hybrid approach with specialized gender-specific
connections at the lower, input level and shared connections for the remaining net
worked even better. As depicted in figure 4, in this architecture the gender identification network selects one of the two gender-specific bundles of connections between
the input and hidden layer. This technique improved the word accuracy to 92.0% .
More experiments with speaker-specific subnetworks are reported in [HW93] .
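The hybrid architecture amounts to a branching forward pass: the gender identification network selects one of two input-to-hidden weight bundles, and everything above the hidden layer is shared. The sketch below is a schematic with random weights; layer sizes follow the text (16 FFT coefficients, 15 hidden units, 59 phonemes), but the time-delayed, windowed connections of the real MS-TDNN are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
W_in = {"male":   rng.normal(size=(15, 16)),   # gender-specific bundles between
        "female": rng.normal(size=(15, 16))}   # the input and hidden layers
W_shared = rng.normal(size=(59, 15))           # shared hidden-to-phoneme weights

def forward(fft_frame, gender):
    hidden = np.tanh(W_in[gender] @ fft_frame)          # selected bundle only
    return 1.0 / (1.0 + np.exp(-(W_shared @ hidden)))   # phoneme scores

frame = rng.normal(size=16)       # one frame of 16 melscale FFT coefficients
print(forward(frame, "female").shape)                   # (59,)
```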
4 EXPERIMENTAL RESULTS
Our MS-TDNN achieved excellent performance on both speaker dependent and
independent tasks. For speaker dependent testing, we used the "CMU Alph Data", with 1000 sentences (i.e. a continuously spelled string of letters) from each
of 3 male and 3 female speakers. 500, 100, and 400 sentences were used as train-
[Figure 4: A network architecture with gender-specific and shared connections, from the FFT input through the Input Layer and Hidden Layer to the Phoneme Layer. Only the front-end TDNN is shown.]
ing, cross-validation and test set, respectively. The DARPA Resource Management
Spell-Mode Data were used for speaker independent testing. This data base
contains about 1700 sentences, spelled by 85 male and 35 female speakers. The
speech of 7 male and 4 female speakers was set aside for the test set, one sentence
from all 109 and all sentences from 6 training speakers were used for crossvalidation.
Table 1 summarizes our results. With the help of the training techniques described
above, we were able to outperform previously reported [HFW91] speaker dependent
results as well as the HMM-based SPHINX System.
5 SUMMARY AND FUTURE WORK
We have presented a connectionist speech recognition system for high accuracy
connected letter recognition. New training techniques aimed at improving sentence
level recognition enabled our MS-TDNN to outperform previous systems of its own
kind as well as a state-of-the-art HMM-based system (SPHINX). Beyond the gender
specific subnets, we are experimenting with an MS-TDNN which maintains several
"internal speaker models" for a more sophisticated speaker-independent system. In
the future we will also experiment with context dependent phoneme models.
Acknowledgements
The authors gratefully acknowledge support by the National Science Foundation
and DARPA. We wish to thank Joe Tebelskis for insightful discussions, Arthur
McNair for keeping our machines running, and especially Patrick Haffner. Many of
the ideas presented have been developed in collaboration with him.
References
[CFGJ91]
R. A. Cole, M. Fanty, Gopalakrishnan, and R. D. T. Janssen. Speaker-Independent Name Retrieval from Spellings Using a Database of 50,000 Names.
Speaker Dependent (CMU Alph Data)
500/2500 train, 100/500 crossvalidation, 400/2000 test sentences/words

  speaker   SPHINX [HFW91]   MS-TDNN [HFW91]   our MS-TDNN
  mjmt           96.0             97.5             98.5
  mdbs           83.9             89.7             91.1
  maem            -                -               94.6
  fcaw            -                -               98.8
  flgt            -                -               86.9
  fec             -                -               91.0

Speaker Independent (Resource Management Spell-Mode)
109 (ca. 11000) train, 11 (ca. 900) test speakers (words)

  SPHINX [HH92]   SPHINX + Senone [HH92]   our MS-TDNN   our MS-TDNN, gender specific
       88.7               90.4                 90.8                  92.0

Table 1: Word accuracy (in % on the test sets) on speaker dependent and speaker
independent connected letter tasks.
In Proceedings of the International Conference on Acoustics, Speech and Signal Processing, Toronto, Ontario, Canada, May 1991. IEEE.
M. Fanty and R. Cole. Spoken letter recognition. In Proceedings of the Neural
[FC90]
Information Processing Systems Conference NIPS, Denver, November 1990.
P. Haffner. Connectionist Word-Level Classification in Speech Recognition.
[Haf92]
In Proc. IEEE International Conference on Acoustics, Speech, and Signal
Processing. IEEE, 1992.
P. Haffner, M. Franzini, and A. Waibel. Integrating Time Alignment and
[HFW91]
Neural Networks for High Performance Continuous Speech Recognition. In
Proc. Int. Conf. on Acoustics, Speech, and Signal Processing. IEEE, 1991.
[HH92]
M.-Y. Hwang and X. Huang. Subphonetic Modeling with Markov States Senone. In Proc. IEEE International Conference on Acoustics, Speech, and
Signal Processing, pages 133 - 137. IEEE, 1992.
[HW90]
J. Hampshire and A. Waibel. A Novel Objective Function for Improved
Phoneme Recognition Using Time Delay Neural Networks. IEEE Transactions on Neural Networks, June 1990.
P. Haffner and A. Waibel. Multi-state Time Delay Neural Networks for Con[HW92]
tinuous Speech Recognition. In NIPS(4). Morgan Kaufman, 1992.
H. Hild and A. Waibel. Multi-SpeakerjSpeaker-Independent Architectures for
[HW93]
the Multi-State Time Delay Neural Network. In Proc. IEEE International
Conference on Acoustics, Speech, and Signal Processing. IEEE, 1993.
[JFC90]
R.D.T. Jansen, M. Fanty, and R. A. Cole. Speaker-independent Phonetic
Classification in Continuous English Letters. In Proceedings of the IJCNN
90, Washington D.C., July 1990.
[Ney84]
H. Ney. The Use of a One-Stage Dynamic Programming Algorithm for Connected Word Recognition. In IEEE Transactions on Acoustics, Speech, and
Signal Processing, pages 263-271. IEEE, April 1984.
[WHH+89] A. Waibel, T. Hanazawa, G. Hinton, K. Shikano, and K. Lang. Phoneme
Recognition Using Time-Delay Neural Networks. IEEE, Transactions on
Acoustics, Speech and Signal Processing, March 1989.
PART IX
APPLICATIONS
5,748 | 6,200 | Improved Deep Metric Learning with
Multi-class N-pair Loss Objective
Kihyuk Sohn
NEC Laboratories America, Inc.
[email protected]
Abstract
Deep metric learning has gained much popularity in recent years, following the
success of deep learning. However, existing frameworks of deep metric learning
based on contrastive loss and triplet loss often suffer from slow convergence, partially because they employ only one negative example while not interacting with
the other negative classes in each update. In this paper, we propose to address
this problem with a new metric learning objective called multi-class N -pair loss.
The proposed objective function firstly generalizes triplet loss by allowing joint
comparison among more than one negative examples ? more specifically, N -1
negative examples ? and secondly reduces the computational burden of evaluating
deep embedding vectors via an efficient batch construction strategy using only N
pairs of examples, instead of (N +1)?N . We demonstrate the superiority of our
proposed loss to the triplet loss as well as other competing loss functions for a
variety of tasks on several visual recognition benchmark, including fine-grained
object recognition and verification, image clustering and retrieval, and face verification and identification.
1 Introduction
Distance metric learning aims to learn an embedding representation of the data under which
similar data points are close and dissimilar data points are far apart in the embedding
space [15, 30]. With the success of deep learning [13, 20, 23, 5], deep metric learning has received
a lot of attention. Compared to standard distance metric learning, it learns a nonlinear embedding
of the data using deep neural networks, and it has shown a significant benefit by learning deep
representation using contrastive loss [3, 7] or triplet loss [27, 2] for applications such as face recognition [24, 22, 19] and image retrieval [26]. Although yielding promising progress, such frameworks often suffer from slow convergence and poor local optima, partially because the loss function employs only one negative example while not interacting with the other negative classes per
each update. Hard negative data mining could alleviate the problem, but it is expensive to evaluate
embedding vectors in a deep learning framework during hard negative example search. As to experimental results, only a few have reported strong empirical performance using these loss functions
alone [19, 26], but many have combined with softmax loss to train deep networks [22, 31, 18, 14, 32].
To address this problem, we propose an (N +1)-tuplet loss that optimizes to identify a positive example from N -1 negative examples. Our proposed loss extends triplet loss by allowing joint comparison among more than one negative examples; when N =2, it is equivalent to triplet loss. One
immediate concern with (N +1)-tuplet loss is that it quickly becomes intractable when scaling up
since the number of examples to evaluate in each batch grows in quadratic to the number of tuplets
and their length N . To overcome this, we propose an efficient batch construction method that only
requires 2N examples instead of (N +1)N to build N tuplets of length N +1. We unify the (N +1)tuplet loss with our proposed batch construction method to form a novel, scalable and effective deep
metric learning objective, called multi-class N -pair loss (N -pair-mc loss). Since the N -pair-mc
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
[Figure 1: Deep metric learning with (left) triplet loss and (right) (N+1)-tuplet loss. Embedding vectors f of deep networks are trained to satisfy the constraints of each loss. Triplet loss pulls the positive example while pushing one negative example at a time. On the other hand, (N+1)-tuplet loss pushes N-1 negative examples all at once, based on their similarity to the input example.]
loss already considers comparison to N -1 negative examples in its training objectives, negative data
mining won't be necessary in learning from small or medium-scale datasets in terms of the number
of output classes. For datasets with large number of output classes, we propose a hard negative
?class? mining scheme which greedily adds examples to form a batch from a class that violates the
constraint with the previously selected classes in the batch.
In experiment, we demonstrate the superiority of our proposed N -pair-mc loss to the triplet loss as
well as other competing metric learning objectives on visual recognition, verification, and retrieval
tasks. Specifically, we report much improved recognition and verification performance on our fine-grained car and flower recognition datasets. In comparison to the softmax loss, N -pair-mc loss is as
competitive for recognition but significantly better for verification. Moreover, we demonstrate substantial improvement in image clustering and retrieval tasks on Online product [21], Car-196 [12],
and CUB-200 [25], as well as face verification and identification accuracy on LFW database [8].
2 Preliminary: Distance Metric Learning
Let x ∈ X be an input example and y ∈ {1, ..., L} be its output label. We use x^+ and x^- to denote
positive and negative examples of x, meaning that x and x^+ are from the same class and x^- is from
a different class to x. The kernel f(·; θ) : X → R^K takes x and generates an embedding vector f(x).
We often omit x from f(x) for simplicity, while f inherits all superscripts and subscripts.
Contrastive loss [3, 7] takes pairs of examples as input and trains a network to predict whether two
inputs are from the same class or not. Specifically, the loss is written as follows:
L^m_cont(x_i, x_j; f) = 1{y_i = y_j} ||f_i - f_j||_2^2 + 1{y_i ≠ y_j} max(0, m - ||f_i - f_j||_2)^2    (1)
where m is a margin parameter imposing the distance between examples from different classes to
be larger than m. Triplet loss [27, 2, 19] shares a similar spirit to contrastive loss, but is composed
of triplets, each consisting of a query, a positive example (to the query), and a negative example:
L^m_tri(x, x^+, x^-; f) = max(0, ||f - f^+||_2^2 - ||f - f^-||_2^2 + m)    (2)
Compared to contrastive loss, triplet loss only requires the difference of (dis-)similarities between
positive and negative examples to the query point to be larger than a margin m. Despite their wide
use, both loss functions are known to suffer from slow convergence and they often require expensive
data sampling method to provide nontrivial pairs or triplets to accelerate the training [2, 19, 17, 4].
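For reference, Equations (1) and (2) translate into a few lines of code. The sketch below is a plain NumPy rendering with an arbitrary margin, not the training implementation used in the experiments.

```python
import numpy as np

def contrastive_loss(fi, fj, same, m=1.0):
    """Equation (1): pull same-class pairs together, push different-class
    pairs at least a margin m apart."""
    d = np.linalg.norm(fi - fj)
    return d ** 2 if same else max(0.0, m - d) ** 2

def triplet_loss(f, f_pos, f_neg, m=1.0):
    """Equation (2): the positive must be closer to the query than the
    negative by at least the margin m (in squared distance)."""
    return max(0.0, np.sum((f - f_pos) ** 2) - np.sum((f - f_neg) ** 2) + m)

f, f_pos, f_neg = np.ones(8), 0.9 * np.ones(8), np.zeros(8)
print(contrastive_loss(f, f_pos, same=True), triplet_loss(f, f_pos, f_neg))
```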
3 Deep Metric Learning with Multiple Negative Examples
The fundamental philosophy behind triplet loss is the following: for an input (query) example, we
desire to shorten the distances between its embedding vector and those of positive examples while
enlarging the distances between that of negative examples. However, during one update, the triplet
loss only compares an example with one negative example while ignoring negative examples from
the rest of the classes. As a consequence, the embedding vector for an example is only guaranteed
to be far from the selected negative class but not necessarily the others. Thus we can end up only
differentiating an example from a limited selection of negative classes yet still maintain a small
distance from many other classes. In practice, the hope is that, after looping over sufficiently many
randomly sampled triplets, the final distance metric can be balanced correctly; but individual update
can still be unstable and the convergence would be slow. Specifically, towards the end of training,
most randomly selected negative examples can no longer yield non-zero triplet loss error.
An evident way to improve the vanilla triplet loss is to select a negative example that violates the
triplet constraint. However, hard negative data mining can be expensive with a large number of output classes for deep metric learning. We seek an alternative: a loss function that recruits multiple
negatives for each update, as illustrated by Figure 1. In this case, an input example is being compared against negative examples from multiple classes and it needs to be distinguishable from all of
them at the same time. Ideally, we would like the loss function to incorporate examples across every
class all at once. But it is usually not attainable for large scale deep metric learning due to the memory bottleneck from the neural network based embedding. Motivated by this thought process, we
propose a novel, computationally feasible loss function, illustrated by Figure 2, which approximates
our ideal loss by pushing N examples simultaneously.
3.1 Learning to identify from multiple negative examples
We formalize our proposed method, which is optimized to identify a positive example from multiple
negative examples. Consider an (N +1)-tuplet of training examples {x, x+ , x1 , ? ? ? , xN ?1 }: x+ is a
?1
positive example to x and {xi }N
i=1 are negative. The (N +1)-tuplet loss is defined as follows:
N
?1
X
?1
>
> +
L({x, x+ , {xi }N
};
f
)
=
log
1
+
exp(f
f
?
f
f
)
i
i=1
(3)
i=1
where f (?; ?) is an embedding kernel defined by deep neural network. Recall that it is desirable for
the tuplet loss to involve negative examples across all classes but it is impractical in the case when
the number of output classes L is large; even if we restrict the number of negative examples per
class to one, it is still too demanding to perform standard optimization, such as stochastic gradient
descent (SGD), with a mini-batch size as large as L.
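Equation (3) is straightforward to evaluate once the embeddings are computed; a minimal NumPy sketch follows.

```python
import numpy as np

def tuplet_loss(f, f_pos, f_negs):
    """(N+1)-tuplet loss of Equation (3): identify the positive among N-1
    negatives. f: (D,) query, f_pos: (D,) positive, f_negs: (N-1, D)."""
    logits = f_negs @ f - f @ f_pos      # f^T f_i - f^T f^+ for every negative
    return np.log1p(np.sum(np.exp(logits)))

f, f_pos = np.array([1.0, 0.0]), np.array([0.9, 0.1])
f_negs = np.array([[0.0, 1.0], [-1.0, 0.0]])
print(tuplet_loss(f, f_pos, f_negs))     # ~0.44 for this toy configuration
```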
When N = 2, the corresponding (2+1)-tuplet loss highly resembles the triplet loss as there is only
one negative example for each pair of input and positive examples:
L_(2+1)-tuplet({x, x^+, x_i}; f) = log(1 + exp(f^T f_i - f^T f^+));    (4)
L_triplet({x, x^+, x_i}; f) = max(0, f^T f_i - f^T f^+).    (5)
Indeed, under mild assumptions, we can show that an embedding f minimizes L(2+1)-tuplet if and
only if it minimizes Ltriplet , i.e., two loss functions are equivalent.1 When N > 2, we further argue
the advantages of (N +1)-tuplet loss over triplet loss. We compare (N +1)-tuplet loss with the triplet
loss in terms of partition function estimation of an ideal (L+1)-tuplet loss, where an (L+1)-tuplet
loss coupled with a single example per negative class can be written as follows:
log(1 + Σ_{i=1}^{L-1} exp(f^T f_i - f^T f^+)) = -log [ exp(f^T f^+) / ( exp(f^T f^+) + Σ_{i=1}^{L-1} exp(f^T f_i) ) ]    (6)
Equation (6) is similar to the multi-class logistic loss (i.e., softmax loss) formulation when we view
f as a feature vector, f + and fi ?s as weight vectors, and the denominator on the right hand side
of Equation (6) as a partition function of the likelihood P (y = y + ). We observe that the partition
function corresponding to the (N +1)-tuplet approximates that of (L+1)-tuplet, and larger the value
of N , more accurate the approximation. Therefore, it naturally follows that (N +1)-tuplet loss is a
better approximation than the triplet loss to an ideal (L+1)-tuplet loss.
3.2 N -pair loss for efficient deep metric learning
Suppose we directly apply the (N +1)-tuplet loss to the deep metric learning framework. When the
batch size of SGD is M , there are M ?(N +1) examples to be passed through f at one update. Since
the number of examples to evaluate for each batch grows in quadratic to M and N , it again becomes
impractical to scale the training for a very deep convolutional networks.
Now, we introduce an effective batch construction to avoid excessive computational burden. Let
{(x_1, x_1^+), ..., (x_N, x_N^+)} be N pairs of examples from N different classes, i.e., y_i ≠ y_j, ∀i ≠ j.
We build N tuplets, denoted as {S_i}_{i=1}^N, from the N pairs, where S_i = {x_i, x_1^+, x_2^+, ..., x_N^+}.
Here, x_i is the query for S_i, x_i^+ is the positive example and x_j^+, j ≠ i are the negative examples.
1 We assume f to have unit norm in Equation (5) to avoid degeneracy.
[Figure 2: (a) Triplet loss, (b) (N+1)-tuplet loss, and (c) multi-class N-pair loss with training batch construction. Assuming each pair belongs to a different class, the N-pair batch construction in (c) leverages all 2×N embedding vectors to build N distinct (N+1)-tuplets with {f_i}_{i=1}^N as their queries; thereafter, we congregate these N distinct tuplets to form the N-pair-mc loss. For a batch consisting of N distinct queries, triplet loss requires 3N passes to evaluate the necessary embedding vectors, (N+1)-tuplet loss requires (N+1)N passes and our N-pair-mc loss only requires 2N.]
Figure 2(c) illustrates this batch construction process. The corresponding (N +1)-tuplet loss, which
we refer to as the multi-class N -pair loss (N -pair-mc), can be formulated as follows:2
L_{N-pair-mc}({(x_i, x_i^+)}_{i=1}^N; f) = (1/N) Σ_{i=1}^N log(1 + Σ_{j≠i} exp(f_i^T f_j^+ - f_i^T f_i^+))    (7)
The mathematical formulation of our N -pair loss shares similar spirits with other existing methods,
such as the neighbourhood component analysis (NCA) [6] and the triplet loss with lifted structure [21].3 Nevertheless, our batch construction is designed to achieve the utmost potential of such
(N +1)-tuplet loss, when using deep CNNs as embedding kernel on large scale datasets both in terms
of training data and number of output classes. Therefore, the proposed N -pair-mc loss is a novel
framework consisting of two indispensable components: the (N +1)-tuplet loss, as the building block
loss function, and the N -pair construction, as the key to enable highly scalable training. Later in
Section 4.4, we empirically show the advantage of our N -pair-mc loss framework in comparison to
other variations of mini-batch construction methods.
Finally, we note that the tuplet batch construction is not specific to the (N +1)-tuplet loss. We call the
set of loss functions using tuplet construction method an N -pair loss. For example, when integrated
into the standard triplet loss, we obtain the following one-vs-one N -pair loss (N -pair-ovo):
L_{N-pair-ovo}({(x_i, x_i^+)}_{i=1}^N; f) = (1/N) Σ_{i=1}^N Σ_{j≠i} log(1 + exp(f_i^T f_j^+ - f_i^T f_i^+)).    (8)
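Given the 2N-example batch construction, both N-pair losses reduce to operations on the N × N matrix of inner products between anchors and positives. The sketch below renders Equations (7) and (8); the added L2 regularizer term and its coefficient are illustrative assumptions (cf. Section 3.2.2).

```python
import numpy as np

def n_pair_losses(f, f_pos, l2_reg=0.002):
    """Multi-class (Eq. 7) and one-vs-one (Eq. 8) N-pair losses.

    f, f_pos: (N, D) embeddings of N anchor/positive pairs, one pair per
    class, as produced by the 2N-example batch construction of Section 3.2.
    """
    logits = f @ f_pos.T                      # logits[i, j] = f_i^T f_j^+
    diff = logits - np.diag(logits)[:, None]  # f_i^T f_j^+ - f_i^T f_i^+
    off = ~np.eye(len(f), dtype=bool)         # selects the negatives j != i
    mc = np.mean(np.log1p(np.sum(np.exp(diff) * off, axis=1)))
    ovo = np.mean(np.sum(np.log1p(np.exp(diff)) * off, axis=1))
    reg = l2_reg * (np.mean(np.sum(f ** 2, axis=1))
                    + np.mean(np.sum(f_pos ** 2, axis=1)))
    return mc + reg, ovo + reg

rng = np.random.default_rng(0)
f, f_pos = 0.1 * rng.normal(size=(8, 64)), 0.1 * rng.normal(size=(8, 64))
print(n_pair_losses(f, f_pos))
```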
3.2.1 Hard negative class mining
Hard negative data mining is considered an essential component of many triplet-based distance
metric learning algorithms [19, 17, 4] to improve convergence speed or the final discriminative
performance. When the number of output classes are not too large, it may be unnecessary for N pair loss since the examples from most of the negative classes are considered jointly already. When
we train on datasets with a large number of output classes, the N -pair loss can benefit from carefully
selected impostor examples.
Evaluating deep embedding vectors for multiple examples from large number of classes is computationally demanding. Moreover, for N -pair loss, one theoretically needs N classes that are negative
to one another, which substantially adds to the challenge of hard negative search. To overcome such
difficulty, we propose negative ?class? mining, as opposed to negative ?instance? mining, which
greedily selects negative classes in a relatively efficient manner.
More specifically, the negative class mining for N -pair loss can be executed as follows:
We also consider the symmetric loss to Equation (7) that swaps f and f + to maximize the efficacy.
Our N -pair batch construction can be seen as a special case of lifted structure [21] where the batch includes
only positive pairs that are from disjoint classes. Besides, the loss function in [21] is based on the max-margin
formulation, whereas we optimize the log probability of identification loss directly.
2
3
4
1. Evaluate Embedding Vectors: choose randomly a large number of output classes C; for each
class, randomly pass a few (one or two) examples to extract their embedding vectors.
2. Select Negative Classes: select one class randomly from C classes from step 1. Next, greedily
add a new class that violates triplet constraint the most w.r.t. the selected classes till we reach
N classes. When a tie appears, we randomly pick one of tied classes [28].
3. Finalize N -pair: draw two examples from each selected class from step 2.
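A sketch of the greedy selection in step 2 follows, assuming one embedding per candidate class has already been evaluated in step 1; scoring a candidate's violation by its largest inner product with the classes chosen so far is an assumption made for illustration.

```python
import numpy as np

def greedy_negative_classes(class_embs, n_select, seed=0):
    """Greedily pick n_select mutually-hard classes: start from a random
    class, then repeatedly add the candidate most similar (hence most
    violating) to the classes selected so far, breaking ties randomly."""
    rng = np.random.default_rng(seed)
    C = len(class_embs)
    chosen = [int(rng.integers(C))]
    while len(chosen) < n_select:
        rest = [c for c in range(C) if c not in chosen]
        scores = np.array([max(class_embs[c] @ class_embs[s] for s in chosen)
                           for c in rest])
        ties = np.flatnonzero(scores == scores.max())  # tied hardest classes
        chosen.append(rest[rng.choice(ties)])          # random pick among ties
    return chosen

embs = np.random.default_rng(1).normal(size=(100, 32))  # one vector per class
print(greedy_negative_classes(embs, n_select=8))
```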
3.2.2 L2 norm regularization of embedding vectors
The numerical value of f^T f^+ can be influenced by not only the direction of f^+ but also its norm,
even though the classification decision should be determined merely by the direction. Normalization
can be a solution to avoid such situation, but it is too stringent for our loss formulation since it bounds
the value of |f^T f^+| to be less than 1 and makes the optimization difficult. Instead, we regularize
the L2 norm of the embedding vectors to be small.
4 Experimental Results
We assess the impact of our proposed N-pair loss functions, such as multi-class N-pair loss (N-pair-mc) or one-vs-one N-pair loss (N-pair-ovo), on several generic and fine-grained visual recognition
and verification tasks. As a baseline, we also evaluate the performance of triplet loss with negative
data mining4 (triplet-nm). In our experiments, we draw a pair of examples from two different classes
and then form two triplets: each with one of the positive examples as query, the other one as positive,
(any) one of the negative examples as negative. Thus, a batch of 2N training examples can produce
N = (2N/4) × 2 triplets, which is more efficient than the formulation in Equation (2), where we need 3N
examples to form N triplets. We adapt the smooth upper bound of triplet loss in Equation (4) instead
of large-margin formulation [27] in all our experiments to be consistent with N -pair-mc losses.
We use Adam [11] for mini-batch stochastic gradient descent with data augmentation, namely horizontal flips and random crops. For evaluation, we extract a feature vector and compute the cosine
similarity for verification. When more than one feature vectors are extracted via horizontal flip or
from multiple crops, we use the cosine similarity averaged over all possible combinations between
feature vectors of two examples. For all our experiments except for the face verification, we use
ImageNet pretrained GoogLeNet5 [23] for network initialization; for face verification, we use the
same network architecture as CasiaNet [31] but trained from scratch without the last fully-connected
layer for softmax classification. Our implementation is based on Caffe [10].
4.1 Fine-grained visual object recognition and verification
We evaluate deep metric learning algorithms on fine-grained object recognition and verification
tasks. Specifically, we consider car and flower recognition problems on the following databases:
? Car-333 [29] dataset is composed of 164, 863 images of cars from 333 model categories collected from the internet. Following the experimental protocol [29], we split the dataset into
157, 023 images for training and 7, 840 for testing.
? Flower-610 dataset contains 61, 771 images of flowers from 610 different flower species and
among all collected, 58, 721 images are used for training and 3, 050 for testing.
We train networks for 40k iterations with 144 examples per batch. This corresponds to 72 pairs per
batch for N -pair losses. We perform 5-fold cross-validation on the training set and report the average
performance on the test set. We evaluate both recognition and verification accuracy. Specifically, we
consider verification setting where there are different number of negative examples from different
classes, and determine as success only when the positive example is closer to the query example than
any other negative example. Since the recognition task is involved, we also evaluate the performance
of deep networks trained with softmax loss. The summary results are given in Table 1.
We observe consistent improvement of 72-pair loss models over triplet loss models. Although the
negative data mining could bring substantial improvement to the baseline models, the performance
is not as competitive as 72-pair loss models. Moreover, the 72-pair loss models are trained without
negative data mining, and thus should be more effective for the deep metric learning framework. Between
4 Throughout experiments, negative data mining refers to the negative class mining for both triplet and
N-pair loss instead of negative instance mining.
5 https://github.com/BVLC/caffe/tree/master/models/bvlc_googlenet
Database     Metric          triplet       triplet-nm    72-pair-ovo   72-pair-mc    softmax
Car-333      Recognition     70.24±0.38    83.22±0.09    86.84±0.13    88.37±0.05    89.21±0.16 (88.69±0.20*)
             VRF (neg=1)     96.78±0.04    97.39±0.07    98.09±0.07    97.92±0.06    96.19±0.07
             VRF (neg=71)    48.96±0.35    65.14±0.24    73.05±0.25    76.02±0.30    55.36±0.30
Flower-610   Recognition     71.55±0.26    82.85±0.22    84.10±0.42    85.57±0.25    84.38±0.28 (84.59±0.21*)
             VRF (neg=1)     98.73±0.03    99.15±0.03    99.32±0.03    99.50±0.02    98.72±0.04
             VRF (neg=71)    73.04±0.13    83.13±0.15    87.42±0.18    88.63±0.14    78.44±0.33

Table 1: Mean recognition and verification accuracy with standard error on the test set of Car-333
and Flower-610 datasets. The recognition accuracy of all models is evaluated using a kNN classifier;
for models with softmax classifier, we also evaluate recognition accuracy using the softmax classifier
(*). The verification accuracy (VRF) is evaluated at different numbers of negative examples.
N -pair loss models, the multi-class loss (72-pair-mc) shows better performance than the one-vs-one
loss (72-pair-ovo). As discussed in Section 3.1, superior performance of multi-class formulation is
expected since the N -pair-ovo loss is decoupled in the sense that the individual losses are generated
for each negative example independently.
When it compares to the softmax loss, the recognition performance of the 72-pair-mc loss models
are competitive, showing slightly worse on Car-333 but better on Flower-610 datasets. However, the
performance of softmax loss model breaks down severely on the verification task. We argue that the
representation of the model trained with classification loss is not optimal for verification tasks. For
example, examples near the classification decision boundary can still be classified correctly, but are
prone to be missed for verification when there are examples from different class near the boundary.
4.2 Distance metric learning for unseen object recognition
Distance metric learning allows to learn a metric that can be generalized to an unseen categories.
We highlight this aspect of deep metric learning on several visual object recognition benchmark.
Following the experimental protocol in [21], we evaluate on the following three datasets:
? Stanford Online Product [21] dataset is composed of 120, 053 images from 22, 634 online product categories, and is partitioned into 59, 551 images of 11, 318 categories for training and
60, 502 images of 11, 316 categories for testing.
? Stanford Car-196 [12] dataset is composed of 16, 185 images of cars from 196 model categories.
The first 98 model categories are used for training and the rest for testing.
? Caltech-UCSD Birds (CUB-200) [25] dataset is composed of 11, 788 images of birds from 200
different species. Similarly, we use the first 100 categories for training.
Unlike in Section 4.1, the object categories between train and test sets are disjoint. This makes the
problem more challenging since deep networks can easily overfit to the categories in the train set
and generalization of distance metric to unseen object categories could be difficult.
We closely follow experimental setting of [21]. For example, we initialize the network using ImageNet pretrained GoogLeNet and train for 20k iterations using the same network architecture (e.g.,
64 dimensional embedding for Car-196 and CUB-200 datasets and 512 dimensional embedding for
Online product dataset) and the number of examples (e.g., 120 examples) per batch. Besides, we
use Adam for stochastic optimization and other hyperparameters such as learning rate are tuned accordingly via 5-fold cross-validation on the train set. We report the performance for both clustering
and retrieval tasks using F1 and normalized mutual information (NMI) [16] scores for clustering as
well as recall@K [9] score for retrieval in Table 2.
We observe similar trend as in Section 4.1. The triplet loss model performs the worst among all
losses considered. Negative data mining can alleviate the model to escape from the local optimum,
but the N -pair loss models outperforms even without additional computational cost for negative
data mining. The performance of N -pair loss further improves when combined with the proposed
negative data mining. Overall, we improve by 9.6% on F1 score, 1.99% on NMI score, and 14.41%
on recall@1 score on Online product dataset compared to the baseline triplet loss models. Lastly,
our model outperforms the performance of triplet loss with lifted structure [21], which demonstrates
the effectiveness of the proposed N pair batch construction.
Online product
          triplet   triplet-nm   triplet-lifted    60-pair-ovo   60-pair-ovo   60-pair-mc   60-pair-mc
                                 structure [21]                  -nm                        -nm
F1        19.59     24.27        25.6              23.13         25.31         26.53        28.19
NMI       86.11     87.23        87.5              86.98         87.45         87.77        88.10
K=1       53.32     62.39        61.8              60.71         63.85         65.25        67.73
K=10      72.75     79.69        79.9              78.74         81.22         82.15        83.76
K=100     87.66     91.10        91.1              91.03         91.89         92.60        92.98
K=1000    96.43     97.25        97.3              97.50         97.51         97.92        97.81

          Car-196                                           CUB-200
          triplet   triplet-nm   60-pair-ovo   60-pair-mc   triplet   triplet-nm   60-pair-ovo   60-pair-mc
F1        24.73     27.86        33.52         33.55        21.88     24.37        25.21         27.24
NMI       58.25     59.94        63.87         63.95        55.83     57.87        58.55         60.39
K=1       53.84     61.62        69.52         71.12        43.30     46.47        48.73         50.96
K=2       66.02     73.48        78.76         79.74        55.84     58.58        60.48         63.34
K=4       75.91     81.88        85.80         86.48        67.30     71.03        72.08         74.29
K=8       84.18     87.81        90.94         91.60        77.48     80.17        81.62         83.22

Table 2: F1, NMI, and recall@K scores on the test set of online product [21], Car-196 [12], and
CUB-200 [25] datasets. F1 and NMI scores are averaged over 10 different random seeds for kmeans
clustering but standard errors are omitted due to space limit. The best performing model and those
with overlapping standard errors are bold-faced.
                triplet       triplet-nm    192-pair-ovo   192-pair-mc   320-pair-mc
VRF             95.88±0.30    96.68±0.30    96.92±0.24     98.27±0.19    98.33±0.17
Rank-1          55.14         60.93         66.21          88.58         90.17
DIR@FAR=1%      25.96         34.60         34.14          66.51         71.76

Table 3: Mean verification accuracy (VRF) with standard error, rank-1 accuracy of closed-set identification and DIR@FAR=1% rate of open-set identification [1] on LFW dataset. The number of
examples per batch is fixed to 384 for all models except for the 320-pair-mc model.
4.3 Face verification and identification
Finally, we apply our deep metric learning algorithms on face verification and identification, a problem that determines whether two face images are of the same identity (verification) and a problem that
identifies the face image of the same identity from the gallery with many negative examples (identification). We train our networks on the WebFace database [31], which is composed of 494, 414
images from 10, 575 identities, and evaluate the quality of embedding networks trained with different metric learning objectives on Labeled Faces in the Wild (LFW) [8] database. We follow the
network architecture in [31]. All networks are trained for 240k iterations, while the learning rate is
decreased from 0.0003 to 0.0001 and 0.00003 at 160k and 200k iterations, respectively. We report
the performance of face verification. The summary result is provided in Table 3.
The triplet loss model shows 95.88% verification accuracy, but the performance breaks down on
identification tasks. Although negative data mining helps, the improvement is limited. Compared to
these, the N -pair-mc loss model improves the performance by a significant margin. Furthermore, we
observe additional improvement by increasing N to 320, obtaining 98.33% for verification, 90.17%
for closed-set and 71.76% for open-set identification accuracy. It is worth noting that, although it
shows better performance than the baseline triplet loss models, the N -pair-ovo loss model performs
much worse than the N -pair-mc loss on this problem.
Interestingly, the N -pair-mc loss model also outperforms the model trained with combined contrastive loss and softmax loss whose verification accuracy is reported as 96.13% [31]. Since this
model is trained on the same dataset using the same network architecture, this clearly demonstrates
the effectiveness of our proposed metric learning objectives on face recognition tasks. Nevertheless,
there are other works reported higher accuracy for face verification. For example, [19] demonstrated
99.63% test set verification accuracy on LFW database using triplet network trained with hundred
millions of examples and [22] reported 98.97% by training multiple deep neural networks from
different facial keypoint regions with combined contrastive loss and softmax loss. Since our contribution is complementary to the scale of the training data or the network architecture, it is expected
to bring further improvement by replacing the existing training objectives into our proposal.
[Figure 3: Training curve of triplet, 192-pair-ovo, and 192-pair-mc loss models on WebFace database, over 240k training iterations. We measure both (a) triplet and 192-pair loss as well as (b) triplet and 192-way classification accuracy, for each of the three models.]
             Online product     Car-196                     CUB-200
             60×2     30×4      60×2     30×4     10×12     60×2     30×4     10×12
F1           26.53    25.01     33.55    31.92    29.87     27.24    27.54    26.66
NMI          87.77    87.40     63.87    62.94    61.84     60.39    60.43    59.37
K=1          65.25    63.58     71.12    69.30    65.49     50.96    50.91    49.65

             192×2          96×4           64×6           32×12
VRF          98.27±0.19     98.25±0.25     97.98±0.22     97.57±0.33
Rank-1       88.58          87.53          83.96          79.61
DIR@FAR=1%   66.51          66.22          64.38          56.46

Table 4: F1, NMI, and recall@1 scores on online product, Car-196, and CUB-200 datasets, and
verification and rank-1 accuracy on LFW database. For a model name of N × M, N refers to the
number of different classes in each batch and M to the number of positive examples per class.
Finally, we provide training curve in Figure 3. Since the difference of triplet loss between models is
relatively small, we also measure 192-pair loss (and accuracy) of three models at every 5k iteration.
We observe significantly faster training progress using 192-pair-mc loss than triplet loss; it only
takes 15k iterations to reach at the loss at convergence of triplet loss model (240k iteration).
4.4 Analysis on tuplet construction methods
In this section, we highlight the importance of the proposed tuplet construction strategy using N
pairs of examples by conducting control experiments using different numbers of distinguishable
classes per batch while fixing the total number of examples per batch the same. For example, if we
are to use N/2 different classes per batch rather than N different classes, we select 4 examples from
each class instead of a pair of examples. Since N -pair loss is not defined to handle multiple positive
examples, we follow the definition of NCA in this experiments as follows:
L = \frac{1}{2N} \sum_i -\log \frac{\sum_{j \neq i:\, y_j = y_i} \exp(f_i^\top f_j)}{\sum_{j \neq i} \exp(f_i^\top f_j)}    (9)
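For concreteness, here is a minimal NumPy sketch of the objective in Eq. (9); the function name and the assumption that every class contributes at least two examples to the batch are ours, not part of the paper.

```python
import numpy as np

def nca_batch_loss(embeddings, labels):
    """NCA-style loss of Eq. (9): for each anchor i, the negative log probability
    of drawing a same-class example among all other batch examples, averaged
    over the batch (the 1/2N factor, for a batch of 2N embeddings)."""
    sims = embeddings @ embeddings.T                  # f_i^T f_j for all pairs
    sims = sims - sims.max(axis=1, keepdims=True)     # stabilize the exponentials
    exp_sims = np.exp(sims)
    np.fill_diagonal(exp_sims, 0.0)                   # exclude j == i
    same = labels[:, None] == labels[None, :]
    np.fill_diagonal(same, False)
    positives = (exp_sims * same).sum(axis=1)         # numerator of Eq. (9)
    return float(np.mean(-np.log(positives / exp_sims.sum(axis=1))))
```

With N/2 classes and 4 positives each, as in the control experiment, `labels` simply contains each class identity four times.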
We repeat the experiments of Sections 4.2 and 4.3 and provide the summary results in Table 4. We
observe a certain degree of performance drop as we decrease the number of classes. Nevertheless, all of
these results are substantially better than those of the triplet loss, confirming the importance of training
with multiple negative classes and suggesting that one should train with as many negative classes as possible.
5
Conclusion
Triplet loss has been widely used for deep metric learning, despite its somewhat unsatisfactory convergence. We present a scalable novel objective, the multi-class N-pair loss, for deep metric
learning, which significantly improves upon the triplet loss by pushing away multiple negative examples jointly at each update. We demonstrate the effectiveness of the N-pair-mc loss on fine-grained
visual recognition and verification, as well as visual object clustering and retrieval.
Acknowledgments
We express our sincere thanks to Wenling Shang for her support in many parts of this work from
algorithm development to paper writing. We also thank Junhyuk Oh and Paul Vernaza for helpful
discussion.
References
[1] L. Best-Rowden, H. Han, C. Otto, B. F. Klare, and A. K. Jain. Unconstrained face recognition: Identifying a person of interest
from a media collection. IEEE Transactions on Information Forensics and Security, 9(12):2144–2157, 2014.
[2] G. Chechik, V. Sharma, U. Shalit, and S. Bengio. Large scale online learning of image similarity through ranking. Journal of
Machine Learning Research, 11:1109–1135, 2010.
[3] S. Chopra, R. Hadsell, and Y. LeCun. Learning a similarity metric discriminatively, with application to face verification. In
CVPR, 2005.
[4] Y. Cui, F. Zhou, Y. Lin, and S. Belongie. Fine-grained categorization and dataset bootstrapping using deep metric learning with
humans in the loop. In CVPR, 2016.
[5] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Region-based convolutional networks for accurate object detection and
segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, PP(99):1–1, 2015.
[6] J. Goldberger, G. E. Hinton, S. T. Roweis, and R. Salakhutdinov. Neighbourhood components analysis. In NIPS, 2004.
[7] R. Hadsell, S. Chopra, and Y. LeCun. Dimensionality reduction by learning an invariant mapping. In CVPR, 2006.
[8] G. B. Huang, M. Narayana, and E. Learned-Miller. Towards unconstrained face recognition. In CVPR Workshop, 2008.
[9] H. Jegou, M. Douze, and C. Schmid. Product quantization for nearest neighbor search. IEEE Transactions on Pattern Analysis
and Machine Intelligence, 33(1):117–128, 2011.
[10] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional
architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.
[11] D. Kingma and J. Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
[12] J. Krause, M. Stark, J. Deng, and L. Fei-Fei. 3d object representations for fine-grained categorization. In ICCV Workshop,
2013.
[13] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.
[14] J. Liu, Y. Deng, T. Bai, and C. Huang. Targeting ultimate accuracy: Face recognition via deep embedding. CoRR, abs/1506.07310, 2015.
[15] D. G. Lowe. Similarity metric learning for a variable-kernel classifier. Neural Computation, 7(1):72–85, 1995.
[16] C. D. Manning, P. Raghavan, H. Schütze, et al. Introduction to Information Retrieval, volume 1. Cambridge University Press,
Cambridge, 2008.
[17] M. Norouzi, D. J. Fleet, and R. R. Salakhutdinov. Hamming distance metric learning. In NIPS, 2012.
[18] O. M. Parkhi, A. Vedaldi, and A. Zisserman. Deep face recognition. BMVC, 2015.
[19] F. Schroff, D. Kalenichenko, and J. Philbin. FaceNet: A unified embedding for face recognition and clustering. In CVPR, 2015.
[20] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
[21] H. O. Song, Y. Xiang, S. Jegelka, and S. Savarese. Deep metric learning via lifted structured feature embedding. In CVPR,
2016.
[22] Y. Sun, Y. Chen, X. Wang, and X. Tang. Deep learning face representation by joint identification-verification. In NIPS, 2014.
[23] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with
convolutions. In CVPR, 2015.
[24] Y. Taigman, M. Yang, M. Ranzato, and L. Wolf. Deepface: Closing the gap to human-level performance in face verification. In
CVPR, 2014.
[25] C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie. The Caltech-UCSD Birds-200-2011 Dataset. Technical Report
CNS-TR-2011-001, California Institute of Technology, 2011.
[26] J. Wang, Y. Song, T. Leung, C. Rosenberg, J. Wang, J. Philbin, B. Chen, and Y. Wu. Learning fine-grained image similarity
with deep ranking. In CVPR, 2014.
[27] K. Q. Weinberger, J. Blitzer, and L. K. Saul. Distance metric learning for large margin nearest neighbor classification. In NIPS,
2005.
[28] J. Weston, S. Bengio, and N. Usunier. Wsabie: Scaling up to large vocabulary image annotation. In IJCAI, volume 11, pages
2764–2770, 2011.
[29] S. Xie, T. Yang, X. Wang, and Y. Lin. Hyper-class augmented and regularized deep learning for fine-grained image classification. In CVPR, 2015.
[30] E. P. Xing, A. Y. Ng, M. I. Jordan, and S. Russell. Distance metric learning with application to clustering with side-information.
2003.
[31] D. Yi, Z. Lei, S. Liao, and S. Z. Li. Learning face representation from scratch. CoRR, abs/1411.7923, 2014.
[32] X. Zhang, F. Zhou, Y. Lin, and S. Zhang. Embedding label structures for fine-grained feature representation. In CVPR, 2016.
Conditional Independence Structure
Jacob Steinhardt
Stanford University
[email protected]
Percy Liang
Stanford University
[email protected]
Abstract
We show how to estimate a model's test error from unlabeled data, on distributions
very different from the training distribution, while assuming only that certain conditional independencies are preserved between train and test. We do not need to
assume that the optimal predictor is the same between train and test, or that the
true distribution lies in any parametric family. We can also efficiently compute
gradients of the estimated error and hence perform unsupervised discriminative
learning. Our technical tool is the method of moments, which allows us to exploit
conditional independencies in the absence of a fully-specified model. Our framework encompasses a large family of losses including the log and exponential loss,
and extends to structured output settings such as conditional random fields.
1
Introduction
Can we measure the accuracy of a model at test time without any ground truth labels, and without
assuming the test distribution is close to the training distribution? This is the problem of unsupervised
risk estimation (Donmez et al., 2010): given a loss function L(θ; x, y) and a fixed model θ, estimate
the risk R(θ) := E_{x,y∼p*}[L(θ; x, y)] with respect to a test distribution p*(x, y), given access only
to m unlabeled examples x^{(1:m)} ∼ p*(x). Unsupervised risk estimation lets us estimate model
accuracy on a novel distribution, and is thus important for building reliable machine learning systems.
Beyond evaluating a single model, it also provides a way of harnessing unlabeled data for learning: by
minimizing the estimated risk over θ, we can perform unsupervised learning and domain adaptation.
Unsupervised risk estimation is impossible without some assumptions on p*, as otherwise p*(y | x),
about which we have no observable information, could be arbitrary. How satisfied we should be
with an estimator depends on how strong its underlying assumptions are. In this paper, we present
an approach which rests on surprisingly weak assumptions: that p* satisfies certain conditional
independencies, but not that it lies in any parametric family or is close to the training distribution.
To give a flavor for our results, suppose that y ∈ {1, …, k} and that the loss decomposes as a
sum of three parts: L(θ; x, y) = \sum_{v=1}^{3} f_v(θ; x_v, y), where the x_v (v = 1, 2, 3) are independent
conditioned on y. In this case, we show that we can estimate the risk to error ε in poly(k)/ε² samples,
independently of the dimension of x or θ, with only very mild additional assumptions on p*. In
Sections 2 and 3 we generalize to a larger family of losses including the log and exponential losses,
and extend beyond the multiclass case to conditional random fields.
Some intuition behind our result is provided in Figure 1. At a fixed value of x, we can think of each
f_v as "predicting" that y = j if f_v(x_v, j) is low and f_v(x_v, j′) is high for j′ ≠ j. Since f_1, f_2, and
f_3 all provide independent signals about y, their rate of agreement gives information about the model
accuracy. If f_1, f_2, and f_3 all predict that y = 1, then it is likely that the true y equals 1 and the
loss is small. Conversely, if f_1, f_2, and f_3 all predict different values of y, then the loss is likely
Figure 1: Two possible loss profiles at a given value of x. Left: if f_1, f_2, and f_3 are all minimized at
the same value of y, that is likely to be the correct value and the total loss is likely to be small. Right:
conversely, if f_1, f_2, and f_3 are small at differing values of y, then the loss is likely to be large.
large. This intuition is formalized by Dawid and Skene (1979) when the f_v measure the 0/1-loss of
independent classifiers; in particular, if r_v is the prediction of a classifier based on x_v, then Dawid
and Skene model the r_v as independent given y: p(r_1, r_2, r_3) = \sum_{j=1}^{k} p(y = j) \prod_{v=1}^{3} p(r_v | y = j).
They then use the learned parameters of this model to compute the 0/1-loss.
Partial specification. Dawid and Skene's approach relies on the prediction r_v only taking on k values.
In this case, the full distribution p(r_1, r_2, r_3) can be parametrized by k×k conditional probability
matrices p(r_v | y) and marginals p(y). However, as shown in Figure 1, we want to estimate continuous
losses such as the log loss. We must therefore work with the prediction vector f_v ∈ R^k rather than a
single predicted output r_v ∈ {1, …, k}. To fully model p(f_1, f_2, f_3) would require nonparametric
estimation, resulting in an undesirable sample complexity exponential in k; in contrast to the discrete
case, conditional independence effectively only partially specifies a model for the losses.
To sidestep this issue, we make use of the method of moments, which has recently been used to
fit non-convex latent variable models (e.g. Anandkumar et al., 2012). In fact, it has a much older
history in the econometrics literature, where it is used as a tool for making causal identifications
under structural assumptions, even when no explicit form for the likelihood is known (Anderson and
Rubin, 1949; 1950; Sargan, 1958; 1959; Hansen, 1982; Powell, 1994; Hansen, 2014). It is this latter
perspective that we draw upon. The key insight is that even in the absence of a fully-specified model,
certain moment equations, such as E[f_1 f_2 | y] = E[f_1 | y] E[f_2 | y], can be derived solely from the
assumed conditional independence. Solving these equations yields estimates of E[f_v | y], which can
in turn be used to estimate the risk. Importantly, our procedure avoids estimation of the full loss
distribution p(f1 , f2 , f3 ), on which we make no assumptions other than conditional independence.
Our paper is structured as follows. In Section 2, we present our basic framework, and state and prove
our main result on estimating the risk. In Section 3, we extend our framework in several directions,
including to conditional random fields. In Section 4, we present a gradient-based learning algorithm
and show that the sample complexity needed for learning is d · poly(k)/ε², where d is the dimension
of the parameters θ. In Section 5, we investigate how our method performs empirically.
Related Work. While the formal problem of unsupervised risk estimation was only posed recently
by Donmez et al. (2010), several older ideas from domain adaptation and semi-supervised learning
are also relevant. The covariate shift assumption posits access to labeled samples from a training
distribution p_0(x, y) for which p*(y | x) = p_0(y | x). If p*(x) and p_0(x) are close, we can
approximate p* by p_0 via importance weighting (Shimodaira, 2000; Quiñonero-Candela et al., 2009).
If p* and p_0 are not close, another approach is to assume a well-specified discriminative model family
Θ, such that p_0(y | x) = p*(y | x) = p_{θ*}(y | x) for some θ* ∈ Θ; then the only error when moving
from p_0 to p* is statistical error in the estimation of θ* (Blitzer et al., 2011; Li et al., 2011). Such
assumptions are restrictive: importance weighting only allows small perturbations from p_0 to p*, and
mis-specified models of p(y | x) are common in practice; many authors report that mis-specification
mis-specified models of p(y | x) are common in practice; many authors report that mis-specification
can lead to severe issues in semi-supervised settings (Merialdo, 1994; Nigam et al., 1998; Cozman
and Cohen, 2006; Liang and Klein, 2008; Li and Zhou, 2015). More sophisticated approaches based
on discrepancy minimization (Mansour et al., 2009) or learning invariant representations (Ben-David
et al., 2006; Johansson et al., 2016) typically also make some form of the covariate shift assumption.
Our approach is closest to Dawid and Skene (1979) and some recent extensions (Zhang et al., 2014;
Platanios, 2015; Jaffe et al., 2015; Fetaya et al., 2016). Similarly to Zhang et al. (2014) and Jaffe
et al. (2015), we use the method of moments for estimating latent-variable models. However, those
papers use it for parameter estimation in the face of non-convexity, rather than as a way to avoid full
estimation of p(fv | y). The insight that the method of moments works under partial specification lets
us extend beyond the simple discrete settings they consider to handle more complex continuous and
structured losses. The intriguing work of Balasubramanian et al. (2011) provides an alternate approach
Figure 2: Left: our basic 3-view setup (Assumption 1). Center: Extension 1, to CRFs; the embedding
of 3 views into the CRF is indicated in blue. Right: Extension 3, to include a mediating variable z.
to continuous losses; they show that the distribution of losses L | y is often approximately Gaussian,
and use that to estimate the risk. Among all this work, ours is the first to perform gradient-based
learning and the first to handle a structured loss (the log loss for conditional random fields).
2
Framework and Estimation Algorithm
We will focus on multiclass classification; we assume an unknown true distribution p*(x, y) over
X × Y, where Y = {1, …, k}, and are given unlabeled samples x^{(1)}, …, x^{(m)} drawn i.i.d. from
p*(x). Given parameters θ ∈ R^d and a loss function L(θ; x, y), our goal is to estimate the risk of θ
on p*: R(θ) := E_{x,y∼p*}[L(θ; x, y)]. Throughout, we will make the 3-view assumption:
Assumption 1 (3-view). Under p*, x can be split into x_1, x_2, x_3, which are conditionally independent
given y (see Figure 2). Moreover, the loss decomposes additively across views: L(θ; x, y) =
A(θ; x) − \sum_{v=1}^{3} f_v(θ; x_v, y), for some functions A and f_v.
Note that each x_v can be large (e.g., they could be vectors in R^d). If we have V > 3 views, we
can combine views to obtain V = 3 without loss of generality. It also suffices for just the f_v to be
independent rather than the x_v. Given only 2 views, the risk can be shown to be unidentifiable in
general, although obtaining upper bounds may be possible.
We give some examples where Assumption 1 holds, then state and prove our main result (see Section 3
for additional examples). We start with logistic regression, which will be our primary focus later on:
Example 1 (Logistic Regression). Suppose that we have a log-linear model p_θ(y | x) =
exp(θ^⊤(φ_1(x_1, y) + φ_2(x_2, y) + φ_3(x_3, y)) − A(θ; x)), where x_1, x_2, and x_3 are independent
conditioned on y. If our loss function is the log-loss L(θ; x, y) = −log p_θ(y | x), then Assumption 1
holds with f_v(θ; x_v, y) = θ^⊤ φ_v(x_v, y) and A(θ; x) equal to the log-partition function of p_θ.
Assumption 1 does not hold for the hinge loss (see Appendix A for details), but it does hold for a
modified hinge loss, where we apply the hinge separately to each view:
Example 2 (Modified Hinge Loss). Suppose that L(θ; x, y) = \sum_{v=1}^{3} (1 + \max_{j \neq y} θ^⊤ φ_v(x_v, j) −
θ^⊤ φ_v(x_v, y))_+. In other words, L is the sum of 3 hinge losses, one for each view. Then Assumption 1
holds with A = 0, and −f_v equal to the hinge loss for view v.
The model can also be non-linear within each view x_v, as long as the views are combined additively:
Example 3 (Neural Networks). Suppose that for each view v we have a neural network whose output
is a score for each of the k classes, (f_v(θ; x_v, j))_{j=1}^{k}. Sum the scores f_1 + f_2 + f_3, apply a soft-max,
and evaluate using the log loss; then L(θ; x, y) = A(θ; x) − \sum_{v=1}^{3} f_v(θ; x_v, y), where A(θ; x) is the
log-normalization constant of the softmax, and hence L satisfies Assumption 1.
We are now ready to present our main result on recovering the risk R(θ). The key starting point is the
conditional risk matrices M_v ∈ R^{k×k}, defined as (suppressing the dependence on θ)
(M_v)_{ij} = E[f_v(θ; x_v, i) | y = j].    (1)
In the case of the 0/1-loss, the M_v are confusion matrices; in general, (M_v)_{ij} measures how strongly
we predict class i when the true class is j. If we could recover these matrices along with the marginal
class probabilities π_j := p*(y = j), then estimating the risk would be straightforward; indeed,
R(θ) = E[A(θ; x) − \sum_{v=1}^{3} f_v(θ; x_v, y)] = E[A(θ; x)] − \sum_{j=1}^{k} π_j \sum_{v=1}^{3} (M_v)_{j,j},    (2)
where E[A(θ; x)] can be estimated from unlabeled data alone.
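As a sanity check, the plug-in computation behind (2) is a few lines; the array layout (a list of three k-by-k matrices) is a convention of this sketch, not of the paper.

```python
import numpy as np

def plug_in_risk(A_mean, M, pi):
    """Eq. (2): a Monte Carlo estimate of E[A(theta; x)] minus the pi-weighted
    diagonal sums of the three conditional risk matrices M_v."""
    k = len(pi)
    return A_mean - sum(pi[j] * sum(Mv[j, j] for Mv in M) for j in range(k))
```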
Caveat: Class permutation. Suppose that at training time, we learn to predict whether an image contains the digit 0 or 1. At test time, nothing changes except the definitions of 0 and 1 are reversed. It is
clearly impossible to detect this from unlabeled data; mathematically, the risk matrices M_v are only
recoverable up to column permutation. We will end up computing the minimum risk over these permutations, which we call the optimistic risk and denote R̃(θ) := \min_{π∈Sym(k)} E_{x,y∼p*}[L(θ; x, π(y))].
This equals the true risk as long as θ is at least aligned with the correct classes, in the sense that
E_x[L(θ; x, j) | y = j] ≤ E_x[L(θ; x, j′) | y = j] for j′ ≠ j. The optimal permutation can be computed from
M_v and π in O(k³) time using maximum weight bipartite matching; see Section B for details.
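The maximum weight bipartite matching can be carried out with the Hungarian algorithm; the SciPy routine below is our choice of implementation, not one specified by the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def optimal_class_permutation(M, pi):
    """Resolve the column-permutation ambiguity: weight[j, j'] is pi_{j'} times
    the summed (M_v)_{j, j'} entries; a maximum-weight matching then gives the
    relabeling that minimizes the estimated risk (cf. Algorithm 1, step 4)."""
    weight = pi[None, :] * sum(M)                 # k-by-k score matrix
    rows, cols = linear_sum_assignment(-weight)   # negate to maximize
    return cols                                   # cols[j] = matched class for row j
```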
Our main result, Theorem 1, says that we can recover both M_v and π up to permutation, with a
number of samples that is polynomial in k:
Theorem 1. Suppose Assumption 1 holds. Then, for any ε, δ ∈ (0, 1), we can estimate M_v and π up
to column permutation, to error ε (in Frobenius and ∞-norm respectively). Our algorithm requires
m = poly(k, π_min^{−1}, σ^{−1}, τ) · \frac{\log(2/δ)}{ε²} samples to succeed with probability 1 − δ, where
π_min := \min_{j=1}^{k} p*(y = j),  τ := E[\sum_{v,j} f_v(θ; x_v, j)²],  and  σ := \min_{v=1}^{3} σ_k(M_v),    (3)
and σ_k denotes the kth singular value. Moreover, the algorithm runs in time m · poly(k).
Estimates for M_v and π imply an estimate for R̃ via (2); see Algorithm 1 below for details. Importantly, the sample complexity in Theorem 1 depends on the number of classes k, but not on the
dimension d of θ. Moreover, Theorem 1 holds even if p* lies outside the model family Θ, and even if
the train and test distributions are very different (in fact, the result is agnostic to how the model θ was
produced). The only requirement is the 3-view assumption for p* and that σ, π_min ≠ 0.
Let us interpret each term in (3). First, τ tracks the variance of the loss, and we should expect the
difficulty of estimating the risk to increase with this variance. The log(2/δ)/ε² term is typical and shows
up even when estimating the parameter of a random variable to accuracy ε from m samples. The
π_min^{−1} term appears because, if one of the classes is very rare, we need to wait a long time to observe
even a single sample from that class, and even longer to estimate the risk on that class accurately.
Perhaps least intuitive is the σ^{−1} term, which is large e.g. when two classes have similar conditional
risk vectors E[(f_v(θ; x_v, i))_{i=1}^{k} | y = j]. To see why this matters, consider an extreme where x_1, x_2,
and x_3 are independent not only of each other but also of y. Then p*(y) is completely unconstrained,
and it is impossible to estimate R at all. Why does this not contradict Theorem 1? The answer is
that in this case, all rows of M_v are equal and hence M_v has rank 1, σ = 0, σ^{−1} = ∞, and we need
infinitely many samples for Theorem 1 to hold; σ measures how close we are to this degenerate case.
Proof of Theorem 1. We now outline a proof of Theorem 1. Recall that the goal is to estimate the
conditional risk matrices M_v, defined as (M_v)_{ij} = E[f_v(θ; x_v, i) | y = j]; from these we can
recover the risk itself using (2). The key insight is that certain moments of p*(y | x) can be expressed
as polynomial functions of the matrices M_v, and therefore we can solve for the M_v even without
explicitly estimating p*. Our approach follows the technical machinery behind the spectral method of
moments (e.g., Anandkumar et al., 2012), which we explain below for completeness.
Define the loss vector h_v(x_v) = (f_v(θ; x_v, i))_{i=1}^{k}, which measures the loss that would be incurred
under each of the k classes. The conditional independence of the x_v means that E[h_1(x_1) h_2(x_2)^⊤ |
y] = E[h_1(x_1) | y] E[h_2(x_2) | y]^⊤, and similarly for higher-order conditional moments. Marginalizing over y, we see that there is low-rank structure in the moments of h that we can exploit; in
particular (letting ⊗ denote outer product and A_{·,j} denote the jth column of A):
E[h_v(x_v)] = \sum_{j=1}^{k} π_j (M_v)_{·,j},   E[h_v(x_v) ⊗ h_{v′}(x_{v′})] = \sum_{j=1}^{k} π_j (M_v)_{·,j} ⊗ (M_{v′})_{·,j}  for v ≠ v′, and
E[h_1(x_1) ⊗ h_2(x_2) ⊗ h_3(x_3)] = \sum_{j=1}^{k} π_j (M_1)_{·,j} ⊗ (M_2)_{·,j} ⊗ (M_3)_{·,j}.    (4)
The left-hand side of each equation can be estimated from unlabeled data; using tensor decomposition
(Lathauwer, 2006; Comon et al., 2009; Anandkumar et al., 2012; 2013; Kuleshov et al., 2015), it is
Algorithm 1 Algorithm for estimating R̃(θ) from unlabeled data.
1: Input: unlabeled samples x^{(1)}, …, x^{(m)} ∼ p*(x).
2: Estimate the left-hand side of each term in (4) using x^{(1:m)}.
3: Compute approximations M̂_v and π̂ to M_v and π using tensor decomposition.
4: Compute the permutation π maximizing \sum_{j=1}^{k} π̂_{π(j)} \sum_{v=1}^{3} (M̂_v)_{j,π(j)} using maximum bipartite matching.
5: Output: estimated risk, \frac{1}{m} \sum_{i=1}^{m} A(θ; x^{(i)}) − \sum_{j=1}^{k} π̂_{π(j)} \sum_{v=1}^{3} (M̂_v)_{j,π(j)}.
then possible to solve for M_v and π. In particular, we can recover M and π up to permutation: that is,
we recover M̂ and π̂ such that M_{i,j} ≈ M̂_{i,π(j)} and π_j ≈ π̂_{π(j)} for some permutation π ∈ Sym(k).
This then yields Theorem 1; see Section C for a full proof.
Assumption 1 thus yields a set of moment equations (4) whose solution lets us estimate the risk
without any labels y. The procedure is summarized in Algorithm 1: we (i) approximate the left-hand side of each term in (4) by sample averages; (ii) use tensor decomposition to solve for π and M_v; (iii)
use maximum matching to compute the permutation π; and (iv) use (2) to obtain R̃ from π and M_v.
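A sketch of steps 1 and 2 of Algorithm 1 in NumPy follows, with h1, h2, h3 the per-view loss vectors stacked as m-by-k arrays; the tensor decomposition of step 3 is delegated to the cited libraries and not reproduced here.

```python
import numpy as np

def empirical_moments(h1, h2, h3):
    """Sample versions of the left-hand sides of Eq. (4). Feeding these to a
    tensor decomposition routine recovers pi and the columns of the M_v."""
    m = h1.shape[0]
    first = h1.mean(axis=0)                              # E[h_1]
    second = h1.T @ h2 / m                               # E[h_1 (x) h_2]
    third = np.einsum('mi,mj,ml->ijl', h1, h2, h3) / m   # E[h_1 (x) h_2 (x) h_3]
    return first, second, third
```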
3
Extensions
Theorem 1 provides a basic building block which admits several extensions to more complex model
structures. We go over several cases below, omitting most proofs to avoid tedium.
Extension 1 (Conditional Random Field). Most importantly, the variable y need not belong to a
small discrete set; we can handle structured outputs such as a CRF as long as p* has the right structure.
This contrasts with previous work on unsupervised risk estimation that was restricted to multiclass
classification (though in a different vein, it is close to Proposition 8 of Anandkumar et al. (2012)).
Suppose that p*(x_{1:T}, y_{1:T}) factorizes as a hidden Markov model, and that p_θ is a CRF respecting
the HMM structure: p_θ(y_{1:T} | x_{1:T}) ∝ \prod_{t=2}^{T} f_θ(y_{t−1}, y_t) · \prod_{t=1}^{T} g_θ(y_t, x_t). For the log-loss
L(θ; x, y) = −log p_θ(y_{1:T} | x_{1:T}), we can exploit the decomposition
−log p_θ(y_{1:T} | x_{1:T}) = \sum_{t=2}^{T} \underbrace{−\log p_θ(y_{t−1}, y_t | x_{1:T})}_{=: ℓ_t} − \sum_{t=2}^{T−1} \underbrace{−\log p_θ(y_t | x_{1:T})}_{=: ℓ′_t}.    (5)
Each of the components ℓ_t and ℓ′_t satisfies Assumption 1 (see Figure 2; for ℓ_t, the views are
x_{1:t−2}, x_{t−1:t}, x_{t+1:T}, and for ℓ′_t they are x_{1:t−1}, x_t, x_{t+1:T}). We use Theorem 1 to estimate
each E[ℓ_t] and E[ℓ′_t] individually, and thus also the full risk E[L]. (We actually estimate the risk for
y_{2:T−1} | x_{1:T} due to the 3-view assumption failing at the boundaries.)
In general, the idea in (5) applies to any structured output problem that is a sum of local 3-view
structures. It would be interesting to extend our results to other structures such as more general
graphical models (Chaganty and Liang, 2014) and parse trees (Hsu et al., 2012).
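The decomposition (5) rests on the chain identity p(y_{1:T}) = \prod_t p(y_{t−1}, y_t) / \prod_t p(y_t) over interior nodes; the following brute-force numerical check of that identity is our own (conditioning on x is suppressed, since the identity holds for any fixed x).

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
T, k = 4, 3
psi = rng.random((T - 1, k, k))                   # arbitrary pairwise chain potentials

def joint(y):                                      # unnormalized chain probability
    return np.prod([psi[t][y[t], y[t + 1]] for t in range(T - 1)])

ys = list(product(range(k), repeat=T))
Z = sum(joint(y) for y in ys)
p = {y: joint(y) / Z for y in ys}

def marginal(positions):                           # brute-force marginalization
    out = {}
    for y, py in p.items():
        key = tuple(y[t] for t in positions)
        out[key] = out.get(key, 0.0) + py
    return out

y0 = ys[5]
lhs = -np.log(p[y0])
rhs = sum(-np.log(marginal((t - 1, t))[(y0[t - 1], y0[t])]) for t in range(1, T)) \
    - sum(-np.log(marginal((t,))[(y0[t],)]) for t in range(1, T - 1))
assert np.isclose(lhs, rhs)                        # the decomposition of Eq. (5) holds
```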
Extension 2 (Exponential Loss). We can also relax the additivity L = A − f_1 − f_2 − f_3 in Assumption 1.
For instance, suppose L(θ; x, y) = exp(−θ^⊤ \sum_{v=1}^{3} φ_v(x_v, y)) is the exponential loss. Theorem 1
lets us estimate the matrices M_v corresponding to f_v(θ; x_v, y) = exp(−θ^⊤ φ_v(x_v, y)). Then
R(θ) = E[\prod_{v=1}^{3} f_v(θ; x_v, y)] = \sum_{j} π_j \prod_{v=1}^{3} E[f_v(θ; x_v, j) | y = j]    (6)
by conditional independence, so the risk can be computed as \sum_j π_j \prod_{v=1}^{3} (M_v)_{j,j}. This idea extends
to any loss expressible as L(θ; x, y) = A(θ; x) + \sum_{i=1}^{n} \prod_{v=1}^{3} f_{iv}(θ; x_v, y) for some functions f_{iv}.
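The corresponding plug-in computation, in the same sketch style as before:

```python
import numpy as np

def exponential_loss_risk(M, pi):
    """Risk of Eq. (6): a pi-weighted sum over classes of the product of the
    three diagonal entries (M_v)_{j,j}."""
    return sum(pi[j] * np.prod([Mv[j, j] for Mv in M]) for j in range(len(pi)))
```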
Extension 3 (Mediating Variable). Assuming that x1:3 are independent conditioned only on y may
not be realistic; there might be multiple subclasses of a class (e.g., multiple ways to write the digit 4)
which would induce systematic correlations across views. To address this, we show that independence
need only hold conditioned on a mediating variable z, rather than on the class y itself.
Let z be a refinement of y (in the sense that knowing z determines y) which takes on k′ values, and
suppose that the views x_1, x_2, x_3 are independent conditioned on z, as in Figure 2. Then we can
try to estimate the risk by defining L′(θ; x, z) = L(θ; x, y(z)), which satisfies Assumption 1. The
problem is that the corresponding risk matrices M′_v will only have k distinct rows and hence have
rank k < k′. To fix this, suppose that the loss vector h_v(x_v) = (f_v(x_v, j))_{j=1}^{k} can be extended
to a vector h′_v(x_v) ∈ R^{k′}, such that (i) the first k coordinates of h′_v(x_v) are h_v(x_v) and (ii) the
conditional risk matrix M′_v corresponding to h′_v has full rank. Then, Theorem 1 allows us to recover
M′_v and hence also M_v (since it is a sub-matrix of M′_v) and thereby estimate the risk.
Mv0 and hence also Mv (since it is a sub-matrix of Mv0 ) and thereby estimate the risk.
4
From Estimation to Learning
We now turn our attention to unsupervised learning, i.e., minimizing R(θ) over θ ∈ R^d. Unsupervised
learning is impossible without some additional information, since even if we could learn the k classes,
we wouldn't know which class had which label (this is the same as the class permutation issue from
before). Thus we assume that we have a small amount of information to break this symmetry:
Assumption 2 (Seed Model). We have access to a "seed model" θ_0 such that R̃(θ_0) = R(θ_0).
Assumption 2 is very weak: it merely asks for θ_0 to be aligned with the true classes on average.
We can obtain θ_0 from a small amount of labeled data (semi-supervised learning) or by training in a
nearby domain (domain adaptation). We define gap(θ_0) to be the difference between R(θ_0) and the
next smallest permutation of the classes, i.e., gap(θ_0) := \min_{π ≠ id} E[L(θ_0; x, π(y)) − L(θ_0; x, y)],
which will affect the difficulty of learning.
For simplicity we will focus on the case of logistic regression, and show how to learn given only
Assumptions 1 and 2. Our algorithm extends to general losses, as we show in Section F.
Learning from moments. Note that for logistic regression (Example 1), we have
R(θ) = E[A(θ; x) − θ^⊤ \sum_{v=1}^{3} φ_v(x_v, y)] = E[A(θ; x)] − θ^⊤ φ̄,  where φ̄ := \sum_{v=1}^{3} E[φ_v(x_v, y)].    (7)
From (7), we see that it suffices to estimate φ̄, after which all terms on the right-hand side of (7)
are known. Given an approximation φ̂ to φ̄ (we will show how to obtain φ̂ below), we can learn a
near-optimal θ by solving the following convex optimization problem:
θ̂ = \arg\min_{‖θ‖_2 ≤ ρ} E[A(θ; x)] − θ^⊤ φ̂.    (8)
In practice we would need to approximate E[A(θ; x)] by samples, but we ignore this for simplicity
(it generally only contributes lower-order terms to the error). The reason for the ℓ_2-constraint on θ is
that it imparts robustness to the error between φ̂ and φ̄. In particular (see Section D for a proof):
Lemma 1. Suppose ‖φ̂ − φ̄‖_2 ≤ ε. Then the output θ̂ from (8) satisfies R(θ̂) ≤ \min_{‖θ‖_2 ≤ ρ} R(θ) + 2ρε.
If the optimal θ* has ℓ_2-norm at most ρ, Lemma 1 says that θ̂ nearly minimizes the risk: R(θ̂) ≤
R(θ*) + 2ρε. The problem of learning θ thus reduces to computing a good estimate φ̂ of φ̄.
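Problem (8) can be solved by projected gradient descent on the sampled objective; the sketch below is our own, with E[A] replaced by a sample average and `phi` holding the summed view features φ(x^{(i)}, y) for each unlabeled example and candidate label.

```python
import numpy as np

def solve_problem_8(phi, phi_hat, rho, steps=500, lr=0.1):
    """Minimize mean_i A(theta; x_i) - theta^T phi_hat over the L2 ball of
    radius rho. `phi` has shape (m, k, d); A is the log-partition function,
    so grad E[A] = E[sum_y p_theta(y|x) phi(x, y)]."""
    theta = np.zeros(phi.shape[2])
    for _ in range(steps):
        scores = phi @ theta                            # (m, k) label scores
        scores -= scores.max(axis=1, keepdims=True)     # stable softmax
        probs = np.exp(scores)
        probs /= probs.sum(axis=1, keepdims=True)       # p_theta(y | x_i)
        grad = np.einsum('mk,mkd->d', probs, phi) / len(phi) - phi_hat
        theta -= lr * grad                              # gradient step on (8)
        norm = np.linalg.norm(theta)
        if norm > rho:
            theta *= rho / norm                         # project onto the L2 ball
    return theta
```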
Computing φ̂. Estimating φ̄ can be done in a manner similar to how we estimated R(θ) in Section 2.
In addition to the conditional risk matrix M_v ∈ R^{k×k}, we compute the conditional moment matrix
G_v ∈ R^{dk×k}, which tracks the conditional expectation of φ_v: (G_v)_{i+(r−1)k, j} := E[φ_v(x_v, i)_r |
y = j], where r indexes 1, …, d. We then have φ̄_r = \sum_{j=1}^{k} π_j \sum_{v=1}^{3} (G_v)_{j+(r−1)k, j}.
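Unpacking the indexing, the recovery of φ̄ from the stacked matrices G_v looks as follows (a sketch with zero-based indices, so the row offset is r·k rather than (r−1)k):

```python
import numpy as np

def phi_bar_from_G(G, pi, d, k):
    """phi_bar_r = sum_j pi_j sum_v (G_v)_{j + r*k, j} for r = 0..d-1;
    G is a list of three dk-by-k arrays, pi the class marginals."""
    return np.array([sum(pi[j] * sum(Gv[j + r * k, j] for Gv in G)
                         for j in range(k)) for r in range(d)])
```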
As in Theorem 1, we can solve for G_1, G_2, and G_3 using a tensor factorization similar to (4), though
some care is needed to avoid explicitly forming the (kd) × (kd) × (kd) tensor that would result
(since O(k³d³) memory is intractable for even moderate values of d). We take a standard approach
based on random projections (Halko et al., 2011) and described in Section 6.1.2 of Anandkumar et al.
(2013). We refer the reader to the aforementioned references for details, and cite only the resulting
sample complexity and runtime, which are both roughly d times larger than in Theorem 1.
Theorem 2. Suppose that Assumptions 1 and 2 hold. Let δ < 1 and ε < min(1, gap(θ_0)).
Then, given m = poly(k, π_min^{−1}, σ^{−1}, τ) · \frac{\log(2/δ)}{ε²} samples, where σ and τ are as defined in (3),
Figure 3: A few sample train images (left) and test images (right) from the modified MNIST data set.
[Figure 4 plots omitted. Panel (a) shows estimated risk vs. distortion a for the validation-error, entropy, tensor, and tensor + L-BFGS estimators against the true risk; panels (b) and (c) show risk R(θ) vs. distortion a for the baseline, tensor + L-BFGS, and oracle models.]
Figure 4: Results on the modified MNIST data set. (a) Risk estimation for varying degrees of
distortion a. (b) Domain adaptation with 10,000 training and 10,000 test examples. (c) Domain
adaptation with 300 training and 10,000 test examples.
with probability 1 − δ we can recover M_v and π to error ε, and G_v to error ε(B/τ), where
B² := E[\sum_{i,v} ‖φ_v(x_v, i)‖²_2] measures the ℓ_2-norm of the features. The algorithm runs in time
O(d(m + poly(k))), and the errors are in Frobenius norm for M and G, and ∞-norm for π.
See Section E for a proof sketch. Whereas before we estimated the risk matrix M_v to error ε, now
we estimate the gradient matrix G_v (and hence φ̄) to error ε(B/τ). To achieve error ε in estimating
G_v requires (B/τ)² · poly(k, π_min^{−1}, σ^{−1}, τ) · \frac{\log(2/δ)}{ε²} samples, which is (B/τ)² times as large as in
Theorem 1. The quantity (B/τ)² typically grows as O(d), and so the sample complexity needed to
estimate φ̄ is typically d times larger than the sample complexity needed to estimate R. This matches
the behavior of the supervised case where we need d times as many samples for learning as compared
to (supervised) risk estimation of a fixed model.
Summary. We have shown how to perform unsupervised logistic regression, given only a seed model
θ_0. This enables unsupervised learning under fairly weak assumptions (only the multi-view and seed
model assumptions) even for mis-specified models and zero train-test overlap, and without assuming
covariate shift. See Section F for learning under more general losses.
5
Experiments
To better understand the behavior of our algorithms, we perform experiments on a version of the
MNIST data set that is modified to ensure that the 3-view assumption holds. To create an image
I, we sample a class in {0, …, 9}, then sample 3 images I_1, I_2, I_3 at random from that class,
letting every third pixel in I come from the respective image I_v. This guarantees that there are 3
conditionally independent views. To explore train-test variation, we dim pixel p in the image by
exp(a(‖p − p_0‖_2 − 0.4)), where p_0 is the image center and distances are normalized to be at most
1. We show example images for a = 0 (train) and a = 5 (a possible test distribution) in Figure 3.
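A sketch of this construction (the function name and the 28×28 size are our assumptions, not specified in the paper):

```python
import numpy as np

def make_three_view_image(images, a=0.0):
    """Interleave three same-class images pixel-by-pixel (flattened pixel p is
    taken from image p mod 3), then rescale intensities by exp(a(||p - p0|| - 0.4))."""
    h, w = images[0].shape
    out = np.empty(h * w)
    for v in range(3):
        out[v::3] = images[v].ravel()[v::3]      # every third pixel from view v
    out = out.reshape(h, w)
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(ys - (h - 1) / 2, xs - (w - 1) / 2)
    dist /= dist.max()                           # normalize distances to at most 1
    return out * np.exp(a * (dist - 0.4))
```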
Risk estimation. We use Algorithm 1 to perform unsupervised risk estimation for a model trained
on a = 0, testing on various values of a ∈ [0, 10]. We trained the model with AdaGrad (Duchi
et al., 2010) on 10,000 training examples, and used 10,000 test examples to estimate the risk. To
solve for π and M in (4), we first use the tensor power method implemented by Chaganty and Liang
(2013) to initialize, and then locally minimize a weighted ℓ_2-norm of the moment errors in (4) using
L-BFGS. We compared with two other methods: (i) validation error from held-out samples (which
would be valid if train = test), and (ii) predictive entropy \sum_j −p_θ(j | x) \log p_θ(j | x) on the test set
(which would be valid if the predictions were well-calibrated). The results are shown in Figure 4a;
both the tensor method in isolation and tensor + L-BFGS estimate the risk accurately, with the latter
performing slightly better.
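The predictive entropy baseline is a one-liner; a sketch:

```python
import numpy as np

def mean_predictive_entropy(probs):
    """Average over test examples of -sum_j p(j|x) log p(j|x); `probs` is an
    m-by-k array of predicted class probabilities."""
    return float(np.mean(-np.sum(probs * np.log(probs + 1e-12), axis=1)))
```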
Unsupervised domain adaptation. We next evaluate our learning algorithm in an unsupervised
domain adaptation setting, where we receive labeled training data at a = 0 and unlabeled test data
at a different value of a. We use the training data to obtain a seed model θ_0, and then perform
unsupervised learning (Section 4), setting ρ = 10 in (8). The results are shown in Figure 4b. For
small values of a, our algorithm performs worse than the baseline of directly using θ_0, likely due
to finite-sample effects. However, our algorithm is far more robust as a increases, and tracks the
performance of an oracle that was trained on the same distribution as the test examples.
Because we only need to provide our algorithm with a seed model for disentangling the classes, we
do not need much data when training θ_0. To verify this, we tried obtaining θ_0 from only 300 labeled
examples. Tensor decomposition sometimes led to bad initializations in this limited data regime, in
which case we obtained a different θ_0 by training with a smaller step size. The results are shown in
Figure 4c. Our algorithm generally performs well, but has higher variability than before, seemingly
due to the higher condition number of the matrices M_v.
Summary. Our experiments show that given 3 views, we can estimate the risk and perform unsupervised domain adaptation, even with limited labeled data from the source domain.
6
Discussion
We have presented a method for estimating the risk from unlabeled data, which relies only on
conditional independence structure and hence makes no parametric assumptions about the true
distribution. Our approach applies to a large family of losses and extends beyond classification tasks
to conditional random fields. We can also perform unsupervised learning given only a seed model that
can distinguish between classes in expectation; the seed model can be trained on a related domain, on
a small amount of labeled data, or any combination of the two, and thus provides a pleasingly general
formulation highlighting the similarities between domain adaptation and semi-supervised learning.
Previous approaches to domain adaptation and semi-supervised learning have also exploited multiview structure. Given two views, Blitzer et al. (2011) perform domain adaptation with zero
source/target overlap (covariate shift is still assumed). Two-view approaches (e.g. co-training
and CCA) are also used in semi-supervised learning (Blum and Mitchell, 1998; Ando and Zhang,
2007; Kakade and Foster, 2007; Balcan and Blum, 2010). These methods all assume some form of
low noise or low regret, as do, e.g., transductive SVMs (Joachims, 1999). By focusing on the central
problem of risk estimation, our work connects multi-view learning approaches for domain adaptation
and semi-supervised learning, and removes covariate shift and low-noise/low-regret assumptions
(though we make stronger independence assumptions, and specialize to discrete prediction tasks).
In addition to reliability and unsupervised learning, our work is motivated by the desire to build
machine learning systems with contracts, a challenge recently posed by Bottou (2015); the goal is for
machine learning systems to satisfy a well-defined input-output contract in analogy with software
systems (Sculley et al., 2015). Theorem 1 provides the contract that under the 3-view assumption the
test error is close to our estimate of the test error; the typical (weak) contract of ML systems is that if
train and test are similar, then the test error is close to the training error. One other interesting contract
is to provide prediction regions that contain the truth with probability 1 ? (Shafer and Vovk, 2008;
Khani et al., 2016), which includes abstaining when uncertain as a special case (Li et al., 2011).
The most restrictive part of our framework is the three-view assumption, which is inappropriate if the
views are not completely independent or if the data have structure that is not captured in terms of
multiple views. Since Balasubramanian et al. (2011) obtain results under Gaussianity (which would
be implied by many somewhat dependent views), we are optimistic that unsupervised risk estimation
is possible for a wider family of structures. Along these lines, we end with the following questions:
Open question. In the 3-view setting, suppose the views are not completely independent. Is it still
possible to estimate the risk? How does the degree of dependence affect the number of views needed?
Open question. Given only two independent views, can one obtain an upper bound on the risk R(?)?
The results of this paper have caused us to adopt the following perspective: To leverage unlabeled
data, we should make generative structural assumptions, but still optimize discriminative model
performance. This hybrid approach allows us to satisfy the traditional machine learning goal of
predictive accuracy, while handling lack of supervision and under-specification in a principled way.
Perhaps, then, what is truly needed for learning is understanding the structure of a domain.
Acknowledgments. This research was supported by a Fannie & John Hertz Foundation Fellowship,
a NSF Graduate Research Fellowship, and a Future of Life Institute grant.
References
A. Anandkumar, D. Hsu, and S. M. Kakade. A method of moments for mixture models and hidden Markov
models. In COLT, 2012.
A. Anandkumar, R. Ge, D. Hsu, S. M. Kakade, and M. Telgarsky. Tensor decompositions for learning latent
variable models. arXiv, 2013.
T. W. Anderson and H. Rubin. Estimation of the parameters of a single equation in a complete system of
stochastic equations. The Annals of Mathematical Statistics, pages 46–63, 1949.
T. W. Anderson and H. Rubin. The asymptotic properties of estimates of the parameters of a single equation in a
complete system of stochastic equations. The Annals of Mathematical Statistics, pages 570–582, 1950.
R. K. Ando and T. Zhang. Two-view feature generation model for semi-supervised learning. In COLT, 2007.
K. Balasubramanian, P. Donmez, and G. Lebanon. Unsupervised supervised learning II: Margin-based classification without labels. JMLR, 12:3119–3145, 2011.
M. Balcan and A. Blum. A discriminative model for semi-supervised learning. JACM, 57(3), 2010.
S. Ben-David, J. Blitzer, K. Crammer, and F. Pereira. Analysis of representations for domain adaptation. In
NIPS, pages 137–144, 2006.
J. Blitzer, S. Kakade, and D. P. Foster. Domain adaptation with coupled subspaces. In AISTATS, 2011.
A. Blum and T. Mitchell. Combining labeled and unlabeled data with co-training. In COLT, 1998.
L. Bottou. Two high stakes challenges in machine learning. Invited talk at ICML, 2015.
A. Chaganty and P. Liang. Spectral experts for estimating mixtures of linear regressions. In ICML, 2013.
A. Chaganty and P. Liang. Estimating latent-variable graphical models using moments and likelihoods. In ICML,
2014.
P. Comon, X. Luciani, and A. L. D. Almeida. Tensor decompositions, alternating least squares and other tales.
Journal of Chemometrics, 23(7):393–405, 2009.
F. Cozman and I. Cohen. Risks of semi-supervised learning: How unlabeled data can degrade performance of
generative classifiers. In Semi-Supervised Learning. 2006.
A. P. Dawid and A. M. Skene. Maximum likelihood estimation of observer error-rates using the EM algorithm.
Applied Statistics, 1:20–28, 1979.
P. Donmez, G. Lebanon, and K. Balasubramanian. Unsupervised supervised learning I: Estimating classification
and regression errors without labels. JMLR, 11:1323–1351, 2010.
J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization.
In COLT, 2010.
J. Edmonds and R. M. Karp. Theoretical improvements in algorithmic efficiency for network flow problems.
JACM, 19(2):248–264, 1972.
E. Fetaya, B. Nadler, A. Jaffe, Y. Kluger, and T. Jiang. Unsupervised ensemble learning with dependent
classifiers. In AISTATS, pages 351–360, 2016.
N. Halko, P.-G. Martinsson, and J. Tropp. Finding structure with randomness: Probabilistic algorithms for
constructing approximate matrix decompositions. SIAM Review, 53:217–288, 2011.
L. P. Hansen. Large sample properties of generalized method of moments estimators. Econometrica, 1982.
L. P. Hansen. Uncertainty outside and inside economic models. Journal of Political Economy, 122(5), 2014.
D. Hsu, S. M. Kakade, and P. Liang. Identifiability and unmixing of latent parse trees. In NIPS, 2012.
A. Jaffe, B. Nadler, and Y. Kluger. Estimating the accuracies of multiple classifiers without labeled data. In
AISTATS, pages 407–415, 2015.
T. Joachims. Transductive inference for text classification using support vector machines. In ICML, 1999.
F. Johansson, U. Shalit, and D. Sontag. Learning representations for counterfactual inference. In ICML, 2016.
S. M. Kakade and D. P. Foster. Multi-view regression via canonical correlation analysis. In COLT, 2007.
F. Khani, M. Rinard, and P. Liang. Unanimous prediction for 100% precision with application to learning
semantic mappings. In ACL, 2016.
V. Kuleshov, A. Chaganty, and P. Liang. Tensor factorization via matrix factorization. In AISTATS, 2015.
L. D. Lathauwer. A link between the canonical decomposition in multilinear algebra and simultaneous matrix
diagonalization. SIAM Journal of Matrix Analysis and Applications, 28(3):642–666, 2006.
L. Li, M. L. Littman, T. J. Walsh, and A. L. Strehl. Knows what it knows: a framework for self-aware learning.
Machine Learning, 82(3):399–443, 2011.
Y. Li and Z. Zhou. Towards making unlabeled data never hurt. IEEE TPAMI, 37(1):175–188, 2015.
P. Liang and D. Klein. Analyzing the errors of unsupervised learning. In HLT/ACL, 2008.
Y. Mansour, M. Mohri, and A. Rostamizadeh. Domain adaptation: Learning bounds and algorithms. In COLT,
2009.
B. Merialdo. Tagging English text with a probabilistic model. Computational Linguistics, 20:155–171, 1994.
K. Nigam, A. McCallum, S. Thrun, and T. Mitchell. Learning to classify text from labeled and unlabeled
documents. In Association for the Advancement of Artificial Intelligence (AAAI), 1998.
E. A. Platanios. Estimating accuracy from unlabeled data. Master?s thesis, Carnegie Mellon University, 2015.
J. L. Powell. Estimation of semiparametric models. In Handbook of Econometrics, volume 4. 1994.
J. Quiñonero-Candela, M. Sugiyama, A. Schwaighofer, and N. D. Lawrence. Dataset shift in machine learning.
The MIT Press, 2009.
J. D. Sargan. The estimation of economic relationships using instrumental variables. Econometrica, 1958.
J. D. Sargan. The estimation of relationships with autocorrelated residuals by the use of instrumental variables.
Journal of the Royal Statistical Society: Series B (Statistical Methodology), pages 91–105, 1959.
D. Sculley, G. Holt, D. Golovin, E. Davydov, T. Phillips, D. Ebner, V. Chaudhary, M. Young, J. Crespo, and
D. Dennison. Hidden technical debt in machine learning systems. In NIPS, pages 2494–2502, 2015.
G. Shafer and V. Vovk. A tutorial on conformal prediction. JMLR, 9:371–421, 2008.
H. Shimodaira. Improving predictive inference under covariate shift by weighting the log-likelihood function.
Journal of Statistical Planning and Inference, 90:227–244, 2000.
J. Steinhardt, G. Valiant, and S. Wager. Memory, communication, and statistical queries. In COLT, 2016.
N. Tomizawa. On some techniques useful for solution of transportation network problems. Networks, 1971.
Y. Zhang, X. Chen, D. Zhou, and M. I. Jordan. Spectral methods meet EM: A provably optimal algorithm for
crowdsourcing. arXiv, 2014.
5,750 | 6,202 | Hierarchical Question-Image Co-Attention
for Visual Question Answering
Jiasen Lu*, Jianwei Yang*, Dhruv Batra*†, Devi Parikh*†
*Virginia Tech, †Georgia Institute of Technology
{jiasenlu, jw2yang, dbatra, parikh}@vt.edu
Abstract
A number of recent works have proposed attention models for Visual Question
Answering (VQA) that generate spatial maps highlighting image regions relevant to
answering the question. In this paper, we argue that in addition to modeling "where
to look" or visual attention, it is equally important to model "what words to listen
to" or question attention. We present a novel co-attention model for VQA that
jointly reasons about image and question attention. In addition, our model reasons
about the question (and consequently the image via the co-attention mechanism)
in a hierarchical fashion via a novel 1-dimensional convolution neural networks
(CNN). Our model improves the state-of-the-art on the VQA dataset from 60.3% to
60.5%, and from 61.6% to 63.3% on the COCO-QA dataset. By using ResNet, the
performance is further improved to 62.1% for VQA and 65.4% for COCO-QA.¹
1
Introduction
Visual Question Answering (VQA) [2, 6, 14, 15, 27] has emerged as a prominent multi-discipline
research problem in both academia and industry. To correctly answer visual questions about an
image, the machine needs to understand both the image and question. Recently, visual attention based
models [18, 21–23] have been explored for VQA, where the attention mechanism typically produces
a spatial map highlighting image regions relevant to answering the question.
So far, all attention models for VQA in literature have focused on the problem of identifying "where
to look" or visual attention. In this paper, we argue that the problem of identifying "which words to
listen to" or question attention is equally important. Consider the questions "how many horses are
in this image?" and "how many horses can you see in this image?". They have the same meaning,
essentially captured by the first three words. A machine that attends to the first three words would
arguably be more robust to linguistic variations irrelevant to the meaning and answer of the question.
Motivated by this observation, in addition to reasoning about visual attention, we also address the
problem of question attention. Specifically, we present a novel multi-modal attention model for VQA
with the following two unique features:
Co-Attention: We propose a novel mechanism that jointly reasons about visual attention and question
attention, which we refer to as co-attention. Unlike previous works, which only focus on visual
attention, our model has a natural symmetry between the image and question, in the sense that the
image representation is used to guide the question attention and the question representation(s) are
used to guide image attention.
Question Hierarchy: We build a hierarchical architecture that co-attends to the image and question
at three levels: (a) word level, (b) phrase level and (c) question level. At the word level, we embed the
words to a vector space through an embedding matrix. At the phrase level, 1-dimensional convolution
neural networks are used to capture the information contained in unigrams, bigrams and trigrams.
1
The source code can be downloaded from https://github.com/jiasenlu/HieCoAttenVQA
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Figure 1: Flowchart of our proposed hierarchical co-attention model (example: the question "What color on the stop light is lit up?" with answer "green"). Given a question, we extract its word level, phrase level and question level embeddings. At each level, we apply co-attention on both the image and question. The final answer prediction is based on all the co-attended image and question features.
Specifically, we convolve word representations with temporal filters of varying support, and then
combine the various n-gram responses by pooling them into a single phrase level representation. At
the question level, we use recurrent neural networks to encode the entire question. For each level
of the question representation in this hierarchy, we construct joint question and image co-attention
maps, which are then combined recursively to ultimately predict a distribution over the answers.
Overall, the main contributions of our work are:
• We propose a novel co-attention mechanism for VQA that jointly performs question-guided
visual attention and image-guided question attention. We explore this mechanism with two
strategies, parallel and alternating co-attention, which are described in Sec. 3.3;
• We propose a hierarchical architecture to represent the question, and consequently construct
image-question co-attention maps at 3 different levels: word level, phrase level and question
level. These co-attended features are then recursively combined from word level to question
level for the final answer prediction;
• At the phrase level, we propose a novel convolution-pooling strategy to adaptively select the
phrase sizes whose representations are passed to the question level representation;
• Finally, we evaluate our proposed model on two large datasets, VQA [2] and COCO-QA [15].
We also perform ablation studies to quantify the roles of different components in our model.
2
Related Work
Many recent works [2, 6, 11, 14, 15, 25] have proposed models for VQA. We compare and relate our
proposed co-attention mechanism to other vision and language attention mechanisms in literature.
Image attention. Instead of directly using the holistic entire-image embedding from the fully
connected layer of a deep CNN (as in [2, 13–15]), a number of recent works have explored image
attention models for VQA. Zhu et al. [26] add spatial attention to the standard LSTM model for
pointing and grounded QA. Andreas et al. [1] propose a compositional scheme that consists of a
language parser and a number of neural modules networks. The language parser predicts which neural
module network should be instantiated to answer the question. Some other works perform image
attention multiple times in a stacked manner. In [23], the authors propose a stacked attention network,
which runs multiple hops to infer the answer progressively. To capture fine-grained information from
the question, Xu et al. [22] propose a multi-hop image attention scheme. It aligns words to image
patches in the first hop, and then refers to the entire question for obtaining image attention maps in
the second hop. In [18], the authors generate image regions with object proposals and then select the
regions relevant to the question and answer choice. Xiong et al. [21] augments dynamic memory
network with a new input fusion module and retrieves an answer from an attention based GRU. In
concurrent work, [5] collected "human attention maps" that are used to evaluate the attention maps
generated by attention models for VQA. Note that all of these approaches model visual attention
alone, and do not model question attention. Moreover, [22, 23] model attention sequentially, i.e., later
attention is based on earlier attention, which is prone to error propagation. In contrast, we conduct
co-attention at three levels independently.
Language Attention. Though no prior work has explored question attention in VQA, there are
some related works in natural language processing (NLP) in general that have modeled language
attention. In order to overcome difficulty in translation of long sentences, Bahdanau et al. [3]
propose RNNSearch to learn an alignment over the input sentences. In [8], the authors propose an
attention model to circumvent the bottleneck caused by fixed width hidden vector in text reading and
comprehension. A more fine-grained attention mechanism is proposed in [16]. The authors employ
a word-by-word neural attention mechanism to reason about the entailment in two sentences. Also
focused on modeling sentence pairs, the authors in [24] propose an attention-based bigram CNN for
jointly performing attention between two CNN hierarchies. In their work, three attention schemes are
proposed and evaluated. In [17], the authors propose a two-way attention mechanism to project the
paired inputs into a common representation space.
3
Method
We begin by introducing the notation used in this paper. To ease understanding, our full model
is described in parts. First, our hierarchical question representation is described in Sec. 3.2 and
the proposed co-attention mechanism is then described in Sec. 3.3. Finally, Sec. 3.4 shows how to
recursively combine the attended question and image features to output answers.
3.1
Notation
Given a question with T words, its representation is denoted by $Q = \{q_1, \ldots, q_T\}$, where $q_t$ is the feature vector for the t-th word. We denote $q_t^w$, $q_t^p$ and $q_t^s$ as the word embedding, phrase embedding and question embedding at position t, respectively. The image feature is denoted by $V = \{v_1, \ldots, v_N\}$, where $v_n$ is the feature vector at spatial location n. The co-attention features of image and question at each level in the hierarchy are denoted as $\hat{v}^r$ and $\hat{q}^r$ where $r \in \{w, p, s\}$. The weights in different modules/layers are denoted with W, with appropriate sub/superscripts as necessary. In the exposition that follows, we omit the bias term b to avoid notational clutter.
3.2
Question Hierarchy
Given the 1-hot encoding of the question words $Q = \{q_1, \ldots, q_T\}$, we first embed the words to a vector space (learnt end-to-end) to get $Q^w = \{q_1^w, \ldots, q_T^w\}$. To compute the phrase features, we apply 1-D convolution on the word embedding vectors. Concretely, at each word location, we compute the inner product of the word vectors with filters of three window sizes: unigram, bigram and trigram. For the t-th word, the convolution output with window size s is given by

$$\hat{q}^p_{s,t} = \tanh(W_c^s\, q^w_{t:t+s-1}), \quad s \in \{1, 2, 3\} \tag{1}$$

where $W_c^s$ contains the weight parameters. The word-level features $Q^w$ are appropriately 0-padded before feeding into bigram and trigram convolutions to maintain the length of the sequence after convolution. Given the convolution result, we then apply max-pooling across different n-grams at each word location to obtain phrase-level features

$$q^p_t = \max(\hat{q}^p_{1,t},\, \hat{q}^p_{2,t},\, \hat{q}^p_{3,t}), \quad t \in \{1, 2, \ldots, T\} \tag{2}$$

Our pooling method differs from those used in previous works [9] in that it adaptively selects different gram features at each time step, while preserving the original sequence length and order. We use an LSTM to encode the sequence $q^p_t$ after max-pooling. The corresponding question-level feature $q^s_t$ is the LSTM hidden vector at time t.
Our hierarchical representation of the question is depicted in Fig. 3(a).
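To make the hierarchy concrete, below is a minimal sketch of Eqs. (1)–(2) in PyTorch. The paper's implementation used Torch/Lua [4], so this is our own illustrative rendering: the class name, default dimension d, and padding scheme are assumptions, not the released code.

```python
import torch
import torch.nn as nn

class QuestionHierarchy(nn.Module):
    """Word, phrase and question level features (a sketch of Eqs. 1-2)."""
    def __init__(self, vocab_size, d=512):
        super().__init__()
        self.sizes = (1, 2, 3)                            # uni/bi/trigram windows
        self.embed = nn.Embedding(vocab_size, d)          # word level
        self.convs = nn.ModuleList(
            [nn.Conv1d(d, d, kernel_size=s, padding=s - 1) for s in self.sizes])
        self.lstm = nn.LSTM(d, d, batch_first=True)       # question level

    def forward(self, tokens):                            # tokens: (B, T) int64
        q_w = self.embed(tokens)                          # (B, T, d)
        x = q_w.transpose(1, 2)                           # (B, d, T)
        T = x.size(2)
        # tanh(W_c^s q^w_{t:t+s-1}); slice so every n-gram map has length T again
        grams = [torch.tanh(conv(x)[:, :, s - 1:s - 1 + T])
                 for s, conv in zip(self.sizes, self.convs)]
        q_p = torch.stack(grams, 0).max(0).values         # Eq. (2), (B, d, T)
        q_s, _ = self.lstm(q_p.transpose(1, 2))           # (B, T, d)
        return q_w, q_p.transpose(1, 2), q_s
```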
3.3
Co-Attention
We propose two co-attention mechanisms that differ in the order in which image and question
attention maps are generated. The first mechanism, which we call parallel co-attention, generates
Figure 2: (a) Parallel co-attention mechanism; (b) Alternating co-attention mechanism.
image and question attention simultaneously. The second mechanism, which we call alternating
co-attention, sequentially alternates between generating image and question attentions. See Fig. 2.
These co-attention mechanisms are executed at all three levels of the question hierarchy.
Parallel Co-Attention. Parallel co-attention attends to the image and question simultaneously.
Similar to [22], we connect the image and question by calculating the similarity between image and
question features at all pairs of image-locations and question-locations. Specifically, given an image feature map $V \in \mathbb{R}^{d \times N}$ and the question representation $Q \in \mathbb{R}^{d \times T}$, the affinity matrix $C \in \mathbb{R}^{T \times N}$ is calculated by

$$C = \tanh(Q^T W_b V) \tag{3}$$

where $W_b \in \mathbb{R}^{d \times d}$ contains the weights. After computing this affinity matrix, one possible way of computing the image (or question) attention is to simply maximize out the affinity over the locations of the other modality, i.e. $a^v[n] = \max_i(C_{i,n})$ and $a^q[t] = \max_j(C_{t,j})$. Instead of choosing the max activation, we find that performance is improved if we consider this affinity matrix as a feature and learn to predict image and question attention maps via the following

$$H^v = \tanh(W_v V + (W_q Q)C), \qquad H^q = \tanh(W_q Q + (W_v V)C^T)$$
$$a^v = \mathrm{softmax}(w_{hv}^T H^v), \qquad a^q = \mathrm{softmax}(w_{hq}^T H^q) \tag{4}$$

where $W_v, W_q \in \mathbb{R}^{k \times d}$ and $w_{hv}, w_{hq} \in \mathbb{R}^k$ are the weight parameters. $a^v \in \mathbb{R}^N$ and $a^q \in \mathbb{R}^T$ are the attention probabilities of each image region $v_n$ and word $q_t$ respectively. The affinity matrix C transforms question attention space to image attention space (and vice versa for $C^T$). Based on the above attention weights, the image and question attention vectors are calculated as the weighted sums of the image features and question features, i.e.,

$$\hat{v} = \sum_{n=1}^{N} a^v_n v_n, \qquad \hat{q} = \sum_{t=1}^{T} a^q_t q_t \tag{5}$$

The parallel co-attention is done at each level in the hierarchy, leading to $\hat{v}^r$ and $\hat{q}^r$ where $r \in \{w, p, s\}$.
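The parallel mechanism of Eqs. (3)–(5) fits in a few lines; the sketch below is our own single-example, batch-free rendering with parameter shapes following the notation above, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def parallel_coattention(V, Q, Wb, Wv, Wq, whv, whq):
    """V: (d, N) image features; Q: (d, T) question features.
    Wb: (d, d); Wv, Wq: (k, d); whv, whq: (k,)."""
    C = torch.tanh(Q.t() @ Wb @ V)              # (T, N) affinity, Eq. (3)
    Hv = torch.tanh(Wv @ V + (Wq @ Q) @ C)      # (k, N)
    Hq = torch.tanh(Wq @ Q + (Wv @ V) @ C.t())  # (k, T)
    av = F.softmax(whv @ Hv, dim=0)             # (N,) image attention, Eq. (4)
    aq = F.softmax(whq @ Hq, dim=0)             # (T,) question attention
    v_hat = V @ av                              # (d,) weighted sums, Eq. (5)
    q_hat = Q @ aq                              # (d,)
    return v_hat, q_hat
```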
Alternating Co-Attention. In this attention mechanism, we sequentially alternate between generating image and question attention. Briefly, this consists of three steps (marked in Fig. 2b): 1)
summarize the question into a single vector q; 2) attend to the image based on the question summary
q; 3) attend to the question based on the attended image feature.
Concretely, we define an attention operation $\hat{x} = \mathcal{A}(X; g)$, which takes the image (or question) features X and attention guidance g derived from the question (or image) as inputs, and outputs the attended image (or question) vector. The operation can be expressed in the following steps:

$$H = \tanh(W_x X + (W_g g)\mathbf{1}^T)$$
$$a^x = \mathrm{softmax}(w_{hx}^T H)$$
$$\hat{x} = \sum_i a^x_i x_i \tag{6}$$
Figure 3: (a) Hierarchical question encoding (Sec. 3.2); (b) Encoding for predicting answers (Sec. 3.4).
where $\mathbf{1}$ is a vector with all elements equal to 1, and $W_x, W_g \in \mathbb{R}^{k \times d}$ and $w_{hx} \in \mathbb{R}^k$ are parameters. $a^x$ is the attention weight over the features X.
The alternating co-attention process is illustrated in Fig. 2(b). At the first step of alternating co-attention, X = Q and g is 0; at the second step, X = V, where V is the image features, and the guidance g is the intermediate attended question feature $\hat{s}$ from the first step; finally, we use the attended image feature $\hat{v}$ as the guidance to attend to the question again, i.e., X = Q and $g = \hat{v}$. Similar to the parallel co-attention, the alternating co-attention is also done at each level of the hierarchy.
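A sketch of the attention operation of Eq. (6) and its three alternating calls follows; this is again our own rendering with assumed shapes (zeros for g at the first step), not the authors' code.

```python
import torch
import torch.nn.functional as F

def attend(X, g, Wx, Wg, whx):
    """X: (d, M) features; g: (d,) guidance. Wx, Wg: (k, d); whx: (k,)."""
    H = torch.tanh(Wx @ X + (Wg @ g).unsqueeze(1))  # (k, M); (Wg g) 1^T broadcast
    ax = F.softmax(whx @ H, dim=0)                  # (M,) attention weights
    return X @ ax                                   # (d,) attended vector, Eq. (6)

# Alternating co-attention (Fig. 2b) at one level of the hierarchy:
# s_hat = attend(Q, torch.zeros(d), ...)   # 1) summarize the question
# v_hat = attend(V, s_hat, ...)            # 2) attend image given question summary
# q_hat = attend(Q, v_hat, ...)            # 3) attend question given attended image
```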
3.4
Encoding for Predicting Answers
Following [2], we treat VQA as a classification task. We predict the answer based on the co-attended image and question features from all three levels. We use a multi-layer perceptron (MLP) to recursively encode the attention features as shown in Fig. 3(b):

$$h^w = \tanh(W_w(\hat{q}^w + \hat{v}^w))$$
$$h^p = \tanh(W_p[(\hat{q}^p + \hat{v}^p),\, h^w])$$
$$h^s = \tanh(W_s[(\hat{q}^s + \hat{v}^s),\, h^p])$$
$$p = \mathrm{softmax}(W_h h^s) \tag{7}$$

where $W_w$, $W_p$, $W_s$ and $W_h$ are the weight parameters, $[\cdot]$ is the concatenation operation on two vectors, and p is the probability of the final answer.
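Eq. (7) amounts to a small recursive MLP over the three levels; below is a minimal sketch under the same assumed shapes (hidden size d, a hypothetical answer vocabulary), single example, no batching.

```python
import torch
import torch.nn.functional as F

def predict_answer(feats, Ww, Wp, Ws, Wh):
    """feats: dict of attended (q_hat, v_hat) pairs per level, each (d,).
    Ww: (d, d); Wp, Ws: (d, 2d); Wh: (num_answers, d)."""
    qw, vw = feats["word"]
    qp, vp = feats["phrase"]
    qs, vs = feats["question"]
    hw = torch.tanh(Ww @ (qw + vw))                     # word level
    hp = torch.tanh(Wp @ torch.cat([qp + vp, hw]))      # + phrase level
    hs = torch.tanh(Ws @ torch.cat([qs + vs, hp]))      # + question level
    return F.softmax(Wh @ hs, dim=0)                    # answer distribution
```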
4
Experiment
4.1
Datasets
We evaluate the proposed model on two datasets, the VQA dataset [2] and the COCO-QA dataset
[15].
VQA dataset [2] is the largest dataset for this problem, containing human annotated questions and
answers on Microsoft COCO dataset [12]. The dataset contains 248,349 training questions, 121,512
validation questions, 244,302 testing questions, and a total of 6,141,630 question-answers pairs.
There are three sub-categories according to answer-types including yes/no, number, and other. Each
question has 10 free-response answers. We use the top 1000 most frequent answers as the possible
outputs similar to [2]. This set of answers covers 86.54% of the train+val answers. For testing, we
train our model on VQA train+val and report the test-dev and test-standard results from the VQA
evaluation server. We use the evaluation protocol of [2] in the experiment.
COCO-QA dataset [15] is automatically generated from captions in the Microsoft COCO dataset
[12]. There are 78,736 train questions and 38,948 test questions in the dataset. These questions
are based on 8,000 and 4,000 images respectively. There are four types of questions including
object, number, color, and location. Each type takes 70%, 7%, 17%, and 6% of the whole dataset,
respectively. All answers in this data set are single word. As in [15], we report classification accuracy
as well as Wu-Palmer similarity (WUPS) in Table 2.
Table 1: Results on the VQA dataset. "-" indicates the result is not available.

                              Open-Ended                    Multiple-Choice
                   test-dev               test-std   test-dev               test-std
Method             Y/N   Num   Other  All   All      Y/N   Num   Other  All   All
LSTM Q+I [2]       80.5  36.8  43.0   57.8  58.2     80.5  38.2  53.0   62.7  63.1
Region Sel. [18]    -     -     -      -     -       77.6  34.3  55.8   62.4  64.2
SMem [22]          80.9  37.3  43.1   58.0  58.2      -     -     -      -     -
SAN [23]           79.3  36.6  46.1   58.7  58.9      -     -     -      -     -
FDA [10]           81.1  36.2  45.8   59.2  59.5     81.5  39.0  54.7   64.0   -
DMN+ [21]          80.5  36.8  48.3   60.3  60.4      -     -     -      -     -
Oursp+VGG          79.5  38.7  48.3   60.1   -       79.5  39.8  57.4   64.6   -
Oursa+VGG          79.6  38.4  49.1   60.5   -       79.7  40.1  57.9   64.9   -
Oursa+ResNet       79.7  38.7  51.7   61.8  62.1     79.7  40.0  59.8   65.8  66.1

4.2
Setup
We use Torch [4] to develop our model. We use the Rmsprop optimizer with a base learning rate
of 4e-4, momentum 0.99 and weight-decay 1e-8. We set batch size to be 300 and train for up to
256 epochs with early stopping if the validation accuracy has not improved in the last 5 epochs. For
COCO-QA, the size of the hidden layer $W_s$ is set to 512; for VQA it is set to 1024, since VQA is a much larger dataset. All the other word embedding and hidden layers are vectors of size 512. We apply dropout with probability 0.5 on each layer. Following [23], we rescale the image to 448 × 448, and then take the activation from the last pooling layer of VGGNet [19] or ResNet [7] as its feature.
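For reference, the optimization settings above translate roughly to the following hypothetical PyTorch configuration; the paper used Torch/Lua, so this is analogous rather than identical.

```python
import torch

model = torch.nn.Linear(512, 1000)   # placeholder for the full co-attention model
optimizer = torch.optim.RMSprop(model.parameters(),
                                lr=4e-4, momentum=0.99, weight_decay=1e-8)
batch_size = 300
max_epochs = 256
patience = 5   # stop early if validation accuracy stalls for 5 epochs
```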
4.3
Results and Analysis
There are two test scenarios on VQA: open-ended and multiple-choice. The best performing method
deeper LSTM Q + norm I from [2] is used as our baseline. For open-ended test scenario, we
compare our method with the recent proposed SMem [22], SAN [23], FDA [10] and DMN+ [21].
For multiple choice, we compare with Region Sel. [18] and FDA [10]. We compare with 2-VIS+BLSTM [15], IMG-CNN [13] and SAN [23] on COCO-QA. We use Oursp to refer to our parallel co-attention, and Oursa for alternating co-attention.
Table 1 shows results on the VQA test sets for both open-ended and multiple-choice settings. We can see that our approach improves the state of the art from 60.4% (DMN+ [21]) to 62.1% (Oursa+ResNet) on
open-ended and from 64.2% (FDA [10]) to 66.1% (Oursa +ResNet) on multiple-choice. Notably, for
the question type Other and Num, we achieve 3.4% and 1.4% improvement on open-ended questions,
and 4.0% and 1.1% on multiple-choice questions. As we can see, ResNet features outperform or
match VGG features in all cases. Our improvements are not solely due to the use of a better CNN.
Specifically, FDA [10] also uses ResNet [7], but Oursa +ResNet outperforms it by 1.8% on test-dev.
SMem [22] uses GoogLeNet [20] and the rest all use VGGNet [19], and Ours+VGG outperforms
them by 0.2% on test-dev (DMN+ [21]).
Table 2 shows results on the COCO-QA test set. Similar to the result on VQA, our model improves the
state-of-the-art from 61.6% (SAN(2,CNN) [23]) to 65.4% (Oursa +ResNet). We observe that parallel
co-attention performs better than alternating co-attention in this setup. Both attention mechanisms
have their advantages and disadvantages: parallel co-attention is harder to train because of the dot
product between image and text which compresses two vectors into a single value. On the other hand,
alternating co-attention may suffer from errors being accumulated at each round.
4.4
Ablation Study
In this section, we perform ablation studies to quantify the role of each component in our model.
Specifically, we re-train our approach by ablating certain components:
• Image Attention alone, where in a manner similar to previous works [23], we do not use any question attention. The goal of this comparison is to verify that our improvements are not the result of orthogonal contributions (say, better optimization or better CNN features).
Table 2: Results on the COCO-QA dataset. "-" indicates the result is not available.

Method              Object  Number  Color  Location  Accuracy  WUPS0.9  WUPS0.0
2-VIS+BLSTM [15]     58.2    44.8   49.5    47.3       55.1     65.3     88.6
IMG-CNN [13]          -       -      -       -         58.4     68.5     89.7
SAN(2, CNN) [23]     64.5    48.6   57.9    54.0       61.6     71.6     90.9
Oursp+VGG            65.6    49.6   61.5    56.8       63.3     73.0     91.3
Oursa+VGG            65.6    48.9   59.8    56.7       62.9     72.8     91.3
Oursa+ResNet         68.0    51.0   62.9    58.8       65.4     75.1     92.0
• Question Attention alone, where no image attention is performed.
• W/O Conv, where no convolution and pooling is performed to represent phrases. Instead, we stack another word embedding layer on top of the word level outputs.
• W/O W-Atten, where no word level co-attention is performed. We replace the word level attention with a uniform distribution. Phrase and question level co-attentions are still modeled.
• W/O P-Atten, where no phrase level co-attention is performed, and the phrase level attention is set to be uniform. Word and question level co-attentions are still modeled.
• W/O Q-Atten, where no question level co-attention is performed. We replace the question level attention with a uniform distribution. Word and phrase level co-attentions are still modeled.
Table 3 shows the comparison of our full approach w.r.t. these ablations on the VQA validation set (test sets are not recommended to be used for such experiments). The deeper LSTM Q + norm I baseline in [2] is also reported for comparison. We can see that image-attention-alone does improve performance over the holistic image feature (deeper LSTM Q + norm I), which is consistent with findings of previous attention models for VQA [21, 23].

Table 3: Ablation study on the VQA dataset using Oursa+VGG (validation).

Method            Y/N   Num   Other   All
LSTM Q+I          79.8  32.9  40.7   54.3
Image Atten       79.8  33.9  43.6   55.9
Question Atten    79.4  33.3  41.7   54.8
W/O Q-Atten       79.6  32.1  42.9   55.3
W/O P-Atten       79.5  34.1  45.4   56.7
W/O W-Atten       79.6  34.4  45.6   56.8
Full Model        79.6  35.0  45.7   57.0

Comparing the full model with the ablated versions without word, phrase, and question level attentions reveals a clear and interesting trend: the attention mechanisms closest to the "top" of the hierarchy (i.e. question) matter most, with a drop of 1.7% in accuracy if not modeled; followed by the intermediate level (i.e. phrase), with a drop of 0.3%; finally followed by the "bottom" of the hierarchy (i.e. word), with a drop of 0.2% in accuracy. We hypothesize that this is because the question level is the "closest" to the answer prediction layers in our model. Note that all levels are important, and our final model significantly outperforms not using any linguistic attention (1.1% difference between Full Model and Image Atten). The question attention alone model is better than LSTM Q+I, with an improvement of 0.5%, and worse than image attention alone, with a drop of 1.1%. Oursa further improves if we perform alternating co-attention for one more round, with an improvement of 0.3%.
4.5
Qualitative Results
We now visualize some co-attention maps generated by our method in Fig. 4. At the word level, our
model attends mostly to the object regions in an image, e.g., heads, bird. At the phrase level, the
image attention has different patterns across images. For the first two images, the attention transfers
from objects to background regions. For the third image, the attention becomes more focused on
the objects. We suspect that this is caused by the different question types. On the question side,
our model is capable of localizing the key phrases in the question, thus essentially discovering the question types in the dataset. For example, our model pays attention to the phrases "what color" and "how many snowboarders". Our model successfully attends to the regions in images and the phrases in the questions appropriate for answering the question, e.g., "color of the bird" and the bird region. Because
Figure 4: Visualization of image and question co-attention maps on the COCO-QA dataset, for three examples: "what is the man holding a snowboard on top of a snow covered? A: mountain", "what is the color of the bird? A: white", and "how many snowboarders in formation in the snow, four is sitting? A: 5". From left to right: original image and question pairs, word level co-attention maps, phrase level co-attention maps and question level co-attention maps. For visualization, both image and question attentions are scaled (from red: high to blue: low). Best viewed in color.
our model performs co-attention at three levels, it often captures complementary information from
each level, and then combines them to predict the answer.
5
Conclusion
In this paper, we proposed a hierarchical co-attention model for visual question answering. Co-attention allows our model to attend to different regions of the image as well as different fragments
of the question. We model the question hierarchically at three levels to capture information from
different granularities. The ablation studies further demonstrate the roles of co-attention and question
hierarchy in our final performance. Through visualizations, we can see that our model co-attends
to interpretable regions of images and questions for predicting the answer. Though our model was
evaluated on visual question answering, it can be potentially applied to other tasks involving vision
and language.
Acknowledgements
This work was funded in part by NSF CAREER awards to DP and DB, an ONR YIP award to DP, ONR Grant
N00014-14-1-0679 to DB, a Sloan Fellowship to DP, ARO YIP awards to DB and DP, a Allen Distinguished
Investigator award to DP from the Paul G. Allen Family Foundation, ICTAS Junior Faculty awards to DB and
DP, Google Faculty Research Awards to DP and DB, AWS in Education Research grant to DB, and NVIDIA
GPU donations to DB. The views and conclusions contained herein are those of the authors and should not be
interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the
U.S. Government or any sponsor.
References
[1] Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. Deep compositional question answering
with neural module networks. In CVPR, 2016.
[2] Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick,
and Devi Parikh. Vqa: Visual question answering. In ICCV, 2015.
[3] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning
to align and translate. In ICLR, 2015.
[4] R. Collobert, K. Kavukcuoglu, and C. Farabet. Torch7: A matlab-like environment for machine learning.
In BigLearn, NIPS Workshop, 2011.
[5] Abhishek Das, Harsh Agrawal, C Lawrence Zitnick, Devi Parikh, and Dhruv Batra. Human attention
in visual question answering: Do humans and deep networks look at the same regions? arXiv preprint
arXiv:1606.03556, 2016.
[6] Haoyuan Gao, Junhua Mao, Jie Zhou, Zhiheng Huang, Lei Wang, and Wei Xu. Are you talking to a
machine? dataset and methods for multilingual image question answering. In NIPS, 2015.
[7] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition.
In CVPR, 2016.
[8] Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman,
and Phil Blunsom. Teaching machines to read and comprehend. In NIPS, 2015.
[9] Baotian Hu, Zhengdong Lu, Hang Li, and Qingcai Chen. Convolutional neural network architectures for
matching natural language sentences. In NIPS, 2014.
[10] Ilija Ilievski, Shuicheng Yan, and Jiashi Feng. A focused dynamic attention model for visual question
answering. arXiv:1604.01485, 2016.
[11] Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen,
Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. Visual genome: Connecting language and vision
using crowdsourced dense image annotations. arXiv preprint arXiv:1602.07332, 2016.
[12] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár,
and C Lawrence Zitnick. Microsoft coco: Common objects in context. In ECCV, 2014.
[13] Lin Ma, Zhengdong Lu, and Hang Li. Learning to answer questions from image using convolutional neural
network. In AAAI, 2016.
[14] Mateusz Malinowski, Marcus Rohrbach, and Mario Fritz. Ask your neurons: A neural-based approach to
answering questions about images. In ICCV, 2015.
[15] Mengye Ren, Ryan Kiros, and Richard Zemel. Exploring models and data for image question answering.
In NIPS, 2015.
[16] Tim Rocktäschel, Edward Grefenstette, Karl Moritz Hermann, Tomáš Kočiský, and Phil Blunsom. Reasoning about entailment with neural attention. In ICLR, 2016.
[17] Cicero dos Santos, Ming Tan, Bing Xiang, and Bowen Zhou. Attentive pooling networks. arXiv preprint
arXiv:1602.03609, 2016.
[18] Kevin J Shih, Saurabh Singh, and Derek Hoiem. Where to look: Focus regions for visual question
answering. In CVPR, 2016.
[19] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014.
[20] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru
Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In CVPR, 2015.
[21] Caiming Xiong, Stephen Merity, and Richard Socher. Dynamic memory networks for visual and textual
question answering. In ICML, 2016.
[22] Huijuan Xu and Kate Saenko. Ask, attend and answer: Exploring question-guided spatial attention for
visual question answering. arXiv preprint arXiv:1511.05234, 2015.
[23] Zichao Yang, Xiaodong He, Jianfeng Gao, Li Deng, and Alex Smola. Stacked attention networks for image
question answering. In CVPR, 2016.
[24] Wenpeng Yin, Hinrich Schütze, Bing Xiang, and Bowen Zhou. Abcnn: Attention-based convolutional
neural network for modeling sentence pairs. In ACL, 2016.
[25] Peng Zhang, Yash Goyal, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. Yin and yang: Balancing
and answering binary visual questions. arXiv preprint arXiv:1511.05099, 2015.
[26] Yuke Zhu, Oliver Groth, Michael Bernstein, and Li Fei-Fei. Visual7w: Grounded question answering in
images. In CVPR, 2016.
[27] C Lawrence Zitnick, Aishwarya Agrawal, Stanislaw Antol, Margaret Mitchell, Dhruv Batra, and Devi
Parikh. Measuring machine intelligence through visual question answering. AI Magazine, 37(1), 2016.
Understanding the Effective Receptive Field in
Deep Convolutional Neural Networks
Wenjie Luo*
Yujia Li*
Raquel Urtasun
Richard Zemel
Department of Computer Science
University of Toronto
{wenjie, yujiali, urtasun, zemel}@cs.toronto.edu
Abstract
We study characteristics of receptive fields of units in deep convolutional networks.
The receptive field size is a crucial issue in many visual tasks, as the output must
respond to large enough areas in the image to capture information about large
objects. We introduce the notion of an effective receptive field, and show that it
both has a Gaussian distribution and only occupies a fraction of the full theoretical
receptive field. We analyze the effective receptive field in several architecture
designs, and the effect of nonlinear activations, dropout, sub-sampling and skip
connections on it. This leads to suggestions for ways to address its tendency to be
too small.
1
Introduction
Deep convolutional neural networks (CNNs) have achieved great success in a wide range of problems
in the last few years. In this paper we focus on their application to computer vision: where they are
the driving force behind the significant improvement of the state-of-the-art for many tasks recently,
including image recognition [10, 8], object detection [17, 2], semantic segmentation [12, 1], image
captioning [20], and many more.
One of the basic concepts in deep CNNs is the receptive field, or field of view, of a unit in a certain
layer in the network. Unlike in fully connected networks, where the value of each unit depends on the
entire input to the network, a unit in convolutional networks only depends on a region of the input.
This region in the input is the receptive field for that unit.
The concept of receptive field is important for understanding and diagnosing how deep CNNs work.
Since anywhere in an input image outside the receptive field of a unit does not affect the value of that
unit, it is necessary to carefully control the receptive field, to ensure that it covers the entire relevant
image region. In many tasks, especially dense prediction tasks like semantic image segmentation,
stereo and optical flow estimation, where we make a prediction for each single pixel in the input image,
it is critical for each output pixel to have a big receptive field, such that no important information is
left out when making the prediction.
The receptive field size of a unit can be increased in a number of ways. One option is to stack more
layers to make the network deeper, which increases the receptive field size linearly by theory, as
each extra layer increases the receptive field size by the kernel size. Sub-sampling on the other hand
increases the receptive field size multiplicatively. Modern deep CNN architectures like the VGG
networks [18] and Residual Networks [8, 6] use a combination of these techniques.
In this paper, we carefully study the receptive field of deep CNNs, focusing on problems in which
there are many output units. In particular, we discover that not all pixels in a receptive field contribute
equally to an output unit's response. Intuitively it is easy to see that pixels at the center of a receptive
* denotes equal contribution
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
field have a much larger impact on an output. In the forward pass, central pixels can propagate
information to the output through many different paths, while the pixels in the outer area of the
receptive field have very few paths to propagate its impact. In the backward pass, gradients from an
output unit are propagated across all the paths, and therefore the central pixels have a much larger
magnitude for the gradient from that output.
This observation leads us to study further the distribution of impact within a receptive field on the
output. Surprisingly, we can prove that in many cases the distribution of impact in a receptive field
distributes as a Gaussian. Note that in earlier work [20] this Gaussian assumption about a receptive
field is used without justification. This result further leads to some intriguing findings, in particular
that the effective area in the receptive field, which we call the effective receptive field, only occupies a
fraction of the theoretical receptive field, since Gaussian distributions generally decay quickly from
the center.
The theory we develop for effective receptive field also correlates well with some empirical observations. One such empirical observation is that the currently commonly used random initializations
lead some deep CNNs to start with a small effective receptive field, which then grows during training.
This potentially indicates a bad initialization bias.
Below we present the theory in Section 2 and some empirical observations in Section 3, which aim
at understanding the effective receptive field for deep CNNs. We discuss a few potential ways to
increase the effective receptive field size in Section 4.
2
Properties of Effective Receptive Fields
We want to mathematically characterize how much each input pixel in a receptive field can impact
the output of a unit n layers up the network, and study how the impact distributes within the receptive
field of that output unit. To simplify notation we consider only a single channel on each layer, but
similar results can be easily derived for convolutional layers with more input and output channels.
Assume the pixels on each layer are indexed by (i, j), with their center at (0, 0). Denote the (i, j)-th pixel on the p-th layer as $x^p_{i,j}$, with $x^0_{i,j}$ as the input to the network, and $y_{i,j} = x^n_{i,j}$ as the output on the n-th layer. We want to measure how much each $x^0_{i,j}$ contributes to $y_{0,0}$. We define the effective receptive field (ERF) of this central output unit as the region containing any input pixel with a non-negligible impact on that unit.
The measure of impact we use in this paper is the partial derivative $\partial y_{0,0} / \partial x^0_{i,j}$. It measures how much $y_{0,0}$ changes as $x^0_{i,j}$ changes by a small amount; it is therefore a natural measure of the importance of $x^0_{i,j}$ with respect to $y_{0,0}$. However, this measure depends not only on the weights of the network, but is in most cases also input-dependent, so most of our results will be presented in terms of expectations over the input distribution.
The partial derivative $\partial y_{0,0} / \partial x^0_{i,j}$ can be computed with back-propagation. In the standard setting, back-propagation propagates the error gradient with respect to a certain loss function. Assuming we have an arbitrary loss l, by the chain rule we have

$$\frac{\partial l}{\partial x^0_{i,j}} = \sum_{i',j'} \frac{\partial l}{\partial y_{i',j'}} \frac{\partial y_{i',j'}}{\partial x^0_{i,j}}$$

Then to get the quantity $\partial y_{0,0} / \partial x^0_{i,j}$, we can set the error gradient $\partial l / \partial y_{0,0} = 1$ and $\partial l / \partial y_{i,j} = 0$ for all $i \neq 0$ and $j \neq 0$, then propagate this gradient from there back down the network. The resulting $\partial l / \partial x^0_{i,j}$ equals the desired $\partial y_{0,0} / \partial x^0_{i,j}$. Here we use the back-propagation process without an explicit loss function, and the process can be easily implemented with standard neural network tools.
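This procedure takes only a few lines in a modern framework. The sketch below (PyTorch, our own illustration; `net` is any convolutional network that preserves spatial size, and the sample counts are arbitrary) sets the output gradient to 1 at the center and 0 elsewhere, then averages the absolute input gradient over random inputs:

```python
import torch

def effective_receptive_field(net, in_size=64, n_samples=32):
    """Average |d y_{0,0} / d x| over random inputs (the measure of Sec. 2)."""
    erf = torch.zeros(in_size, in_size)
    for _ in range(n_samples):
        x = torch.randn(1, 1, in_size, in_size, requires_grad=True)
        y = net(x)                                        # (1, 1, H, W) output map
        grad = torch.zeros_like(y)
        grad[0, 0, y.size(2) // 2, y.size(3) // 2] = 1.0  # dl/dy = 1 at the center
        y.backward(grad)                                  # backprop, no explicit loss
        erf += x.grad[0, 0].abs()
    return erf / n_samples

# e.g. a 10-layer linear CNN with 3x3 kernels (padding keeps the spatial size fixed)
net = torch.nn.Sequential(
    *[torch.nn.Conv2d(1, 1, 3, padding=1, bias=False) for _ in range(10)])
erf = effective_receptive_field(net)   # a bell-shaped map peaked at the center
```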
In the following we first consider linear networks, where this derivative does not depend on the input
and is purely a function of the network weights and (i, j), which clearly shows how the impact of the
pixels in the receptive field distributes. Then we move forward to consider more modern architecture
designs and discuss the effect of nonlinear activations, dropout, sub-sampling, dilation convolution
and skip connections on the ERF.
2.1
The simplest case: a stack of convolutional layers of weights all equal to one
Consider the case of n convolutional layers using k × k kernels with stride one, one single channel on each layer and no nonlinearity, stacked into a deep linear CNN. In this analysis we ignore the biases on all layers. We begin by analyzing convolution kernels with weights all equal to one.
Denote $g(i, j, p) = \partial l / \partial x^p_{i,j}$ as the gradient on the p-th layer, and let $g(i, j, n) = \partial l / \partial y_{i,j}$. Then $g(\cdot, \cdot, 0)$ is the desired gradient image of the input. The back-propagation process effectively convolves $g(\cdot, \cdot, p)$ with the k × k kernel to get $g(\cdot, \cdot, p-1)$ for each p.
In this special case, the kernel is a k × k matrix of 1's, so the 2D convolution can be decomposed into the product of two 1D convolutions. We therefore focus exclusively on the 1D case. We have the initial gradient signal u(t) and kernel v(t) formally defined as

$$u(t) = \delta(t), \qquad v(t) = \sum_{m=0}^{k-1} \delta(t - m), \quad \text{where } \delta(t) = \begin{cases} 1, & t = 0 \\ 0, & t \neq 0 \end{cases} \tag{1}$$

and $t = 0, 1, -1, 2, -2, \ldots$ indexes the pixels.
The gradient signal on the input pixels is simply $o = u * v * \cdots * v$, convolving u with n such v's. To compute this convolution, we can use the Discrete Time Fourier Transform to convert the signals into the Fourier domain, and obtain

$$U(\omega) = \sum_{t=-\infty}^{\infty} u(t) e^{-j\omega t} = 1, \qquad V(\omega) = \sum_{t=-\infty}^{\infty} v(t) e^{-j\omega t} = \sum_{m=0}^{k-1} e^{-j\omega m} \tag{2}$$
Applying the convolution theorem, we have that the Fourier transform of o is

$$\mathcal{F}(o) = \mathcal{F}(u * v * \cdots * v)(\omega) = U(\omega) \cdot V(\omega)^n = \left( \sum_{m=0}^{k-1} e^{-j\omega m} \right)^n \tag{3}$$

Next, we need to apply the inverse Fourier transform to get back o(t):

$$o(t) = \frac{1}{2\pi} \int_{-\pi}^{\pi} \left( \sum_{m=0}^{k-1} e^{-j\omega m} \right)^n e^{j\omega t}\, d\omega \tag{4}$$

$$\frac{1}{2\pi} \int_{-\pi}^{\pi} e^{-j\omega s} e^{j\omega t}\, d\omega = \begin{cases} 1, & s = t \\ 0, & s \neq t \end{cases} \tag{5}$$

We can see that o(t) is simply the coefficient of $e^{-j\omega t}$ in the expansion of $\left( \sum_{m=0}^{k-1} e^{-j\omega m} \right)^n$.
Case k = 2: Now let's consider the simplest nontrivial case of k = 2, where $\left( \sum_{m=0}^{k-1} e^{-j\omega m} \right)^n = (1 + e^{-j\omega})^n$. The coefficient for $e^{-j\omega t}$ is then the standard binomial coefficient $\binom{n}{t}$, so $o(t) = \binom{n}{t}$. It is quite well known that binomial coefficients distribute with respect to t like a Gaussian as n becomes large (see for example [13]), which means the scale of the coefficients decays as a squared exponential as t deviates from the center. When multiplying two such 1D Gaussians together, we get a 2D Gaussian; therefore in this case, the gradient on the input plane is distributed like a 2D Gaussian.
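This is easy to check numerically. The snippet below (plain NumPy, our own illustration; n and k are arbitrary) convolves the impulse with the all-ones kernel n times and recovers exactly the binomial coefficients:

```python
import numpy as np
from math import comb

n, k = 10, 2
o = np.array([1.0])                  # u(t) = delta(t)
for _ in range(n):
    o = np.convolve(o, np.ones(k))   # o = u * v * ... * v
assert all(o[t] == comb(n, t) for t in range(n + 1))
print(o)                             # [1, 10, 45, ...]: a discrete Gaussian shape
```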
Case k > 2: In this case the coefficients are known as "extended binomial coefficients" or "polynomial coefficients", and they too distribute like a Gaussian, see for example [3, 16]. This is included as a special case of the more general case presented later in Section 2.3.
2.2
Random weights
Now let's consider the case of random weights. In general, we have

$$g(i, j, p-1) = \sum_{a=0}^{k-1} \sum_{b=0}^{k-1} w^p_{a,b}\, g(i+a, j+b, p) \tag{6}$$

with pixel indices properly shifted for clarity, where $w^p_{a,b}$ is the convolution weight at (a, b) in the convolution kernel on layer p. At each layer, the initial weights are independently drawn from a fixed distribution with zero mean and variance C. We assume that the gradients g are independent from the weights. This assumption is in general not true if the network contains nonlinearities, but for linear networks these assumptions hold. As $\mathbb{E}_w[w^p_{a,b}] = 0$, we can then compute the expectation

$$\mathbb{E}_{w,\text{input}}[g(i, j, p-1)] = \sum_{a=0}^{k-1} \sum_{b=0}^{k-1} \mathbb{E}_w[w^p_{a,b}]\, \mathbb{E}_{\text{input}}[g(i+a, j+b, p)] = 0, \quad \forall p \tag{7}$$
Here the expectation is taken over the w distribution as well as the input data distribution. The variance is more interesting, as

$$\mathrm{Var}[g(i, j, p-1)] = \sum_{a=0}^{k-1} \sum_{b=0}^{k-1} \mathrm{Var}[w^p_{a,b}]\, \mathrm{Var}[g(i+a, j+b, p)] = C \sum_{a=0}^{k-1} \sum_{b=0}^{k-1} \mathrm{Var}[g(i+a, j+b, p)] \tag{8}$$

This is equivalent to convolving the gradient variance image $\mathrm{Var}[g(\cdot, \cdot, p)]$ with a k × k convolution kernel full of 1's, and then multiplying by C to get $\mathrm{Var}[g(\cdot, \cdot, p-1)]$.
Based on this we can apply exactly the same analysis as in Section 2.1 on the gradient variance images. The conclusions carry over easily: $\mathrm{Var}[g(\cdot, \cdot, 0)]$ has a Gaussian shape, with only a slight change of having an extra $C^n$ constant factor multiplier on the variance gradient images, which does not affect the relative distribution within a receptive field.
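Since Eq. (8) says the variance image is produced by repeated convolution with an all-ones kernel scaled by C, the 1-D profile (the 2-D case is separable) can be checked directly; the snippet below is our own NumPy illustration with arbitrary n, k and C:

```python
import numpy as np

n, k, C = 8, 3, 0.1
var_g = np.array([1.0])                    # Var of dl/dy: a delta on the top layer
for _ in range(n):                         # Eq. (8) with uniform Var[w] = C
    var_g = C * np.convolve(var_g, np.ones(k))
print(var_g / var_g.max())                 # bell-shaped profile decaying from center
```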
2.3
Non-uniform kernels
More generally, each pixel in the kernel window can have different weights, or, as in the random weight case, they may have different variances. Let's again consider the 1D case: $u(t) = \delta(t)$ as before, and the kernel signal $v(t) = \sum_{m=0}^{k-1} w(m)\, \delta(t - m)$, where w(m) is the weight for the m-th pixel in the kernel. Without loss of generality, we can assume the weights are normalized, i.e. $\sum_m w(m) = 1$.
Applying the Fourier transform and convolution theorem as before, we get

$$U(\omega) \cdot V(\omega) \cdots V(\omega) = \left( \sum_{m=0}^{k-1} w(m)\, e^{-j\omega m} \right)^n \tag{9}$$

The space domain signal o(t) is again the coefficient of $e^{-j\omega t}$ in the expansion; the only difference is that the $e^{-j\omega m}$ terms are weighted by w(m).
These coefficients turn out to be well studied in the combinatorics literature; see, for example, [3] and the references therein for more details. In [3], it was shown that if the w(m) are normalized, then o(t) exactly equals the probability p(S_n = t), where S_n = \sum_{i=1}^{n} X_i and the X_i's are i.i.d. multinomial variables distributed according to the w(m)'s, i.e. p(X_i = m) = w(m). Notice that the analysis there requires w(m) > 0. But we can reduce to the variance analysis in the random weight case, where the variances are always nonnegative while the weights can be negative. The analysis for negative w(m) is more difficult and is left to future work. However, we found empirically that the implications of the analysis in this section still apply reasonably well to networks with negative weights.
From the central limit theorem point of view, as n \to \infty, the distribution of \sqrt{n}\left(\frac{1}{n} S_n - \mathbb{E}[X]\right) converges to the Gaussian \mathcal{N}(0, \mathrm{Var}[X]) in distribution. This means that, for a given n large enough, S_n is going to be roughly Gaussian with mean n\mathbb{E}[X] and variance n\mathrm{Var}[X]. As o(t) = p(S_n = t), this further implies that o(t) also has a Gaussian shape. When the w(m)'s are normalized, this Gaussian has the following mean and variance:

\mathbb{E}[S_n] = n \sum_{m=0}^{k-1} m\, w(m), \qquad \mathrm{Var}[S_n] = n \left[ \sum_{m=0}^{k-1} m^2 w(m) - \left( \sum_{m=0}^{k-1} m\, w(m) \right)^{\!2} \right] \quad (10)
This indicates that o(t) decays squared-exponentially from the center of the receptive field, according to the Gaussian distribution. The rate of decay is governed by the variance of this Gaussian. If we take one standard deviation as the effective receptive field (ERF) size, which is roughly the radius of the ERF, then this size is \sqrt{\mathrm{Var}[S_n]} = \sqrt{n \mathrm{Var}[X_i]} = O(\sqrt{n}).

On the other hand, as we stack more convolutional layers, the theoretical receptive field grows linearly; therefore, relative to the theoretical receptive field, the ERF actually shrinks at a rate of O(1/\sqrt{n}), which we found surprising.
In the simple case of uniform weighting, we can further see that the ERF size grows linearly with the kernel size k. As w(m) = 1/k, we have

\sqrt{\mathrm{Var}[S_n]} = \sqrt{ n \left[ \sum_{m=0}^{k-1} \frac{m^2}{k} - \left( \sum_{m=0}^{k-1} \frac{m}{k} \right)^{\!2} \right] } = \sqrt{ \frac{n(k^2 - 1)}{12} } = O(k\sqrt{n}) \quad (11)
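Eq. (11) is easy to sanity-check by Monte Carlo. The following sketch (ours) draws S_n as a sum of n uniform multinomial variables over {0, ..., k-1} and compares the empirical standard deviation with \sqrt{n(k^2-1)/12}:

    import numpy as np

    rng = np.random.default_rng(0)
    n, k = 100, 5
    S = rng.integers(0, k, size=(50_000, n)).sum(axis=1)  # S_n with X_i ~ Unif{0,...,k-1}
    print(S.std(), np.sqrt(n * (k**2 - 1) / 12))          # both approximately 14.14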
Remarks: The result derived in this section, i.e., that the distribution of impact within a receptive field in deep CNNs converges to a Gaussian, holds under the following conditions. (1) All layers in the CNN use the same set of convolution weights. This is in general not true; however, when we apply the analysis to the variances, the weight variances on all layers are usually the same up to a constant factor. (2) The convergence derived is convergence "in distribution", as implied by the central limit theorem. This means that the cumulative probability distribution function converges to that of a Gaussian, but at any single point in space the probability can deviate from the Gaussian. (3) The convergence result states that \sqrt{n}\left(\frac{1}{n} S_n - \mathbb{E}[X]\right) \to \mathcal{N}(0, \mathrm{Var}[X]); hence S_n approaches \mathcal{N}(n\mathbb{E}[X], n\mathrm{Var}[X]). However, the convergence of S_n here is not well defined, as \mathcal{N}(n\mathbb{E}[X], n\mathrm{Var}[X]) is not a fixed distribution but instead changes with n. Additionally, the distribution of S_n can deviate from a Gaussian on a finite set. But the overall shape of the distribution is still roughly Gaussian.
2.4 Nonlinear activation functions
Nonlinear activation functions are an integral part of every neural network. We use \sigma to represent an arbitrary nonlinear activation function. During the forward pass, on each layer the pixels are first passed through \sigma and then convolved with the convolution kernel to compute the next layer. This ordering of operations is slightly non-standard but equivalent to the more usual ordering of convolving first and then passing through the nonlinearity, and it makes the analysis slightly easier. The backward pass in this case becomes

g(i, j, p-1) = \sigma'^{p}_{i,j} \sum_{a=0}^{k-1} \sum_{b=0}^{k-1} w^p_{a,b} \, g(i+a, j+b, p) \quad (12)

where we abuse notation a bit and use \sigma'^{p}_{i,j} to represent the gradient of the activation function for pixel (i, j) on layer p.
p 0
For ReLU nonlinearities, ?i,j
= I[xpi,j > 0] where I[.] is the indicator function. We have to
make some extra assumptions about the activations xpi,j to advance the analysis, in addition to
the assumption that it has zero mean and unit variance. A standard assumption is that xpi,j has a
symmetric distribution around 0 [7]. If we make an extra simplifying assumption that the gradients
? 0 are independent from the weights
and g in the upper layers, we can simplify the variance as
p 02 P P
p
p 02
p 0
Var[g(i, j, p ? 1)] = E[?i,j
] a b Var[wa,b
]Var[g(i + a, i + b, p)], and E[?i,j
] = Var[?i,j
]=
1/4 is a constant factor. Following the variance analysis we can again reduce this case to the uniform
weight case.
Sigmoid and Tanh nonlinearities are harder to analyze. Here we only use the observation that, when the network is initialized, the weights are usually small, so these nonlinearities are in their linear region and the linear analysis applies. However, as the weights grow bigger during training, their effect becomes hard to analyze.
2.5 Dropout, Subsampling, Dilated Convolution and Skip-Connections
Here we consider the effect of some standard CNN approaches on the effective receptive field.
Dropout is a popular technique to prevent overfitting; we show that dropout does not change the
Gaussian ERF shape. Subsampling and dilated convolutions turn out to be effective ways to increase
receptive field size quickly. Skip-connections on the other hand make ERFs smaller. We present the
analysis for all these cases in the Appendix.
3 Experiments
In this section, we empirically study the ERF for various deep CNN architectures. We first use
artificially constructed CNN models to verify the theoretical results in our analysis. We then present
our observations on how the ERF changes during the training of deep CNNs on real datasets. For all
ERF studies, we place a gradient signal of 1 at the center of the output plane and 0 everywhere else,
and then back-propagate this gradient through the network to get input gradients.
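Concretely, this measurement takes only a few lines of autograd code. Here is a minimal PyTorch sketch of ours (not the authors' implementation) for a plain stack of 3x3 convolutions:

    import torch
    import torch.nn as nn

    def erf_gradient_map(n_layers=20, k=3, size=101):
        torch.manual_seed(0)
        net = nn.Sequential(*[nn.Conv2d(1, 1, k, bias=False) for _ in range(n_layers)])
        x = torch.randn(1, 1, size, size, requires_grad=True)
        out = net(x)
        grad = torch.zeros_like(out)   # gradient of 1 at the output center, 0 elsewhere
        grad[0, 0, out.shape[2] // 2, out.shape[3] // 2] = 1.0
        out.backward(grad)
        return x.grad[0, 0].abs()      # gradient image on the input plane

    erf = erf_gradient_map()   # averaging such maps over runs gives the panels in Fig. 1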
3.1 Verifying theoretical results
We first verify our theoretical results in artificially constructed deep CNNs. For computing the ERF
we use random inputs, and for all the random weight networks we followed [7, 5] for proper random
initialization. In this section, we verify the following results:
[Figure 1 panels: networks with 5, 10, 20, and 40 layers (theoretical RF sizes 11, 21, 41, and 81), each shown for Uniform, Random, and Random + ReLU weights.]
Figure 1: Comparing the effect of the number of layers, random weight initialization, and nonlinear activation on the ERF. Kernel size is fixed at 3 \times 3 for all the networks here. Uniform: convolutional kernel weights are all ones, no nonlinearity; Random: random kernel weights, no nonlinearity; Random + ReLU: random kernel weights, ReLU nonlinearity.
ERFs are Gaussian distributed: As shown in Fig. 1, we observe perfect Gaussian shapes for uniformly and randomly weighted convolution kernels without nonlinear activations, and near-Gaussian shapes for randomly weighted kernels with nonlinearity. Adding the ReLU nonlinearity makes the distribution a bit less Gaussian, as the ERF distribution depends on the input as well. Another reason is that ReLU units output exactly zero for half of their inputs, and it is very easy to get a zero output for the center pixel on the output plane, which means no path from the receptive field can reach the output, hence the gradient is all zero. Here the ERFs are averaged over 20 runs with different random seeds. [Figure: ERFs for networks with 20 layers of random weights, with ReLU, Tanh, and Sigmoid nonlinearities.] The figures on the right show the ERF for networks with 20 layers of random weights, with different nonlinearities. Here the results are averaged both across 100 runs with different random weights and across different random inputs. In this setting the receptive fields are a lot more Gaussian-like.
\sqrt{n} absolute growth and 1/\sqrt{n} relative shrinkage: In Fig. 2, we show the change of the ERF size and the ratio of the ERF over the theoretical RF w.r.t. the number of convolution layers. The best-fitting line for ERF size has slope 0.56 in the log domain, while the line for the ERF ratio has slope -0.43. This indicates that the ERF size grows linearly w.r.t. \sqrt{N} and the ERF ratio shrinks linearly w.r.t. 1/\sqrt{N}. Note that here we use 2 standard deviations as our measurement for ERF size, i.e., any pixel with value greater than 1 - 95.45% of the center point is considered as in the ERF. The ERF size is represented by the square root of the number of pixels within the ERF, while the theoretical RF size is the side length of the square in which every pixel has a non-zero impact on the output pixel, no matter how small. All experiments here are averaged over 20 runs.
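Written as a helper, the measurement reads as follows (a sketch under our reading of the 2-standard-deviation rule above):

    import numpy as np

    def erf_size(grad_map):
        """ERF size = sqrt(#pixels above (1 - 95.45%) of the center/peak value)."""
        g = np.abs(grad_map)
        threshold = (1 - 0.9545) * g.max()   # the 2-standard-deviation cutoff
        return np.sqrt(float((g > threshold).sum()))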
Subsampling & dilated convolution increase the receptive field: The figure on the right shows the effect of subsampling and dilated convolution. The reference baseline is a convnet with 15 dense convolution layers; its ERF is shown in the left-most panel. We then replace 3 of the 15 convolutional layers with stride-2 convolutions to get the ERF for the "Subsample" panel, and replace them with dilated convolutions with factors 2, 4 and 8 for the "Dilation" panel. As we can see, both of them are able to increase the effective receptive field significantly. Note that the "Dilation" panel shows a rectangular ERF shape typical for dilated convolutions. [Figure: ERFs for the Conv-Only, Subsample, and Dilation variants.]

3.2 How the ERF evolves during training
In this part, we take a look at how the ERF of units in the top-most convolutional layers of a
classification CNN and a semantic segmentation CNN evolve during training. For both tasks, we
adopt the ResNet architecture which makes extensive use of skip-connections. As the analysis shows,
the ERF of this network should be significantly smaller than the theoretical receptive field. This is
indeed what we have observed initially. Intriguingly, as the network learns, the ERF gets bigger, and at the end of training it is significantly larger than the initial ERF.
Figure 2: Absolute growth (left) and relative shrinkage (right) of the ERF.
[Figure 3 panels: CIFAR-10 before/after training; CamVid before/after training.]
Figure 3: Comparison of the ERF before and after training for models trained on CIFAR-10 classification and CamVid semantic segmentation tasks. CIFAR-10 receptive fields are visualized in the image space of 32 \times 32.
For the classification task we trained a ResNet with 17 residual blocks on the CIFAR-10 dataset. At
the end of training this network reached a test accuracy of 89%. Note that in this experiment we did
not use pooling or downsampling, and exclusively focus on architectures with skip-connections. The
accuracy of the network is not state-of-the-art but still quite high. In Fig. 3 we show the effective
receptive field on the 32 \times 32 image space at the beginning of training (with randomly initialized weights) and at the end of training when it reaches the best validation accuracy. Note that the theoretical receptive field of our network is actually 74 \times 74, bigger than the image size, but the ERF is still not
able to fully fill the image. Comparing the results before and after training, we see that the effective
receptive field has grown significantly.
For the semantic segmentation task we used the CamVid dataset for urban scene segmentation. We trained a "front-end" model [21], which is a purely convolutional network that predicts the output at a slightly lower resolution. This network plays the same role as the VGG network does in many previous works [12]. We trained a ResNet with 16 residual blocks interleaved with 4 subsampling operations, each with a factor of 2. Due to these subsampling operations, the output is 1/16 of the input size. For this model, the theoretical receptive field of the top convolutional layer units is quite big at 505 \times 505. However, as shown in Fig. 3, the ERF only gets a fraction of that, with a diameter of 100 at the beginning of training. Again we observe that during training the ERF size increases, and at the end it reaches a diameter of almost 150.
4 Reduce the Gaussian Damage
The above analysis shows that the ERF only takes a small portion of the theoretical receptive field,
which is undesirable for tasks that require a large receptive field.
New Initialization. One simple way to increase the effective receptive field is to manipulate the initial weights. We propose a new random weight initialization scheme that makes the weights at the center of the convolution kernel have a smaller scale, and the weights on the outside larger; this diffuses the concentration at the center out to the periphery. Practically, we can initialize the network with any initialization method, then scale the weights according to a distribution that has a lower scale at the center and a higher scale on the outside.
In the extreme case, we can optimize the w(m)'s to maximize the ERF size or, equivalently, the variance in Eq. 10. Solving this optimization problem leads to a solution that puts equal weights at the 4 corners of the convolution kernel while leaving everything else 0. However, using this solution for random weight initialization is too aggressive, and leaving so many weights at 0 makes learning slow. A softer version of this idea usually works better.
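A minimal sketch of one way to implement the softer scheme (our illustration; the exact radial profile below is an arbitrary design choice, not the one used in our experiments) is to take weights from any standard initializer and re-scale each kernel by a mask that is small at the center and larger toward the periphery:

    import numpy as np

    def rescale_init(w):
        """w: conv weights of shape (out_ch, in_ch, k, k) from a standard initializer [5, 7]."""
        k = w.shape[-1]
        yy, xx = np.mgrid[0:k, 0:k]
        r = np.hypot(yy - (k - 1) / 2, xx - (k - 1) / 2)  # distance from the kernel center
        mask = 0.5 + r / max(r.max(), 1e-12)              # lower scale inside, higher outside
        w2 = w * mask
        return w2 * (w.std() / w2.std())                  # keep the overall weight variance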
We have trained a CNN for the CIFAR-10 classification task with this initialization method, with several random seeds. In a few cases we get a 30% speed-up of training compared to the more standard initializations [5, 7], but overall the benefit of this method is not always significant. We note that no matter how we change w(m), the effective receptive field is still distributed like a Gaussian, so the above proposal only solves the problem partially.
Architectural changes. A potentially better approach is to make architectural changes to the CNNs,
which may change the ERF in more fundamental ways. For example, instead of connecting each unit
in a CNN to a local rectangular convolution window, we can sparsely connect each unit to a larger
area in the lower layer using the same number of connections. Dilated convolution [21] belongs to
this category, but we may push even further and use sparse connections that are not grid-like.
5 Discussion
Connection to biological neural networks. In our analysis we have established that the effective
receptive field in deep CNNs actually grows a lot slower than we used to think. This indicates
that a lot of local information is still preserved even after many convolution layers. This finding
contradicts some long-held relevant notions in deep biological networks. A popular characterization
of mammalian visual systems involves a split into "what" and "where" pathways [19]. Progressing
along the what or where pathway, there is a gradual shift in the nature of connectivity: receptive
field sizes increase, and spatial organization becomes looser until there is no obvious retinotopic
organization; the loss of retinotopy means that single neurons respond to objects such as faces
anywhere in the visual field [9]. However, if the ERF is smaller than the RF, this suggests that
representations may retain position information, and also raises an interesting question concerning
changes in the size of these fields during development.
A second relevant effect of our analysis is that it suggests that convolutional networks may automatically create a form of foveal representation. The fovea of the human retina extracts high-resolution information from an image only in the neighborhood of the central pixel. Sub-fields of equal resolution are arranged such that their size increases with the distance from the center of fixation. At the periphery of the retina, lower-resolution information is extracted from larger regions of the image. Some neural networks have explicitly constructed representations of this form [11]. However, because convolutional networks form Gaussian receptive fields, the underlying representations will naturally have this character.
Connection to previous work on CNNs. While receptive fields in CNNs have not been studied
extensively, [7, 5] conduct similar analyses, in terms of computing how the variance evolves through
the networks. They developed a good initialization scheme for convolution layers following the
principle that variance should not change much when going through the network.
Researchers have also utilized visualizations in order to understand how neural networks work. [14]
showed the importance of using natural-image priors and also what an activation of the convolutional
layer would represent. [22] used deconvolutional nets to show the relation of pixels in the image and
the neurons that are firing. [23] did empirical study involving receptive field and used it as a cue for
localization. There are also visualization studies using gradient ascent techniques [4] that generate
interesting images, such as [15]. These all focus on the unit activations, or feature map, instead of the
effective receptive field which we investigate here.
6 Conclusion
In this paper, we carefully studied the receptive fields in deep CNNs, and established a few surprising
results about the effective receptive field size. In particular, we have shown that the distribution of
impact within the receptive field is asymptotically Gaussian, and the effective receptive field only
takes up a fraction of the full theoretical receptive field. Empirical results echoed the theory we established. We believe this is just the start of the study of the effective receptive field, which provides a new angle for understanding deep CNNs. In the future we hope to learn more about what factors impact the effective receptive field in practice and how we can gain more control over them.
References
[1] Vijay Badrinarayanan, Ankur Handa, and Roberto Cipolla. SegNet: A deep convolutional encoder-decoder architecture for robust semantic pixel-wise labelling. arXiv preprint arXiv:1505.07293, 2015.
[2] Xiaozhi Chen, Kaustav Kundu, Yukun Zhu, Andrew Berneshawi, Huimin Ma, Sanja Fidler, and Raquel Urtasun. 3D object proposals for accurate object class detection. In NIPS, 2015.
[3] Steffen Eger. Restricted weighted integer compositions and extended binomial coefficients. Journal of Integer Sequences, 16(13.1):3, 2013.
[4] Dumitru Erhan, Yoshua Bengio, Aaron Courville, and Pascal Vincent. Visualizing higher-layer features of a deep network. University of Montreal, 1341, 2009.
[5] Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In AISTATS, pages 249–256, 2010.
[6] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.
[7] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In ICCV, pages 1026–1034, 2015.
[8] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. arXiv preprint arXiv:1603.05027, 2016.
[9] Nancy Kanwisher, Josh McDermott, and Marvin M. Chun. The fusiform face area: a module in human extrastriate cortex specialized for face perception. The Journal of Neuroscience, 17(11):4302–4311, 1997.
[10] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, pages 1097–1105, 2012.
[11] Hugo Larochelle and Geoffrey E. Hinton. Learning to combine foveal glimpses with a third-order Boltzmann machine. In NIPS, pages 1243–1251, 2010.
[12] Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In CVPR, pages 3431–3440, 2015.
[13] L. Lovász, J. Pelikán, and K. Vesztergombi. Discrete Mathematics: Elementary and Beyond, 2003.
[14] Aravindh Mahendran and Andrea Vedaldi. Understanding deep image representations by inverting them. In CVPR, pages 5188–5196. IEEE, 2015.
[15] Alexander Mordvintsev, Christopher Olah, and Mike Tyka. Inceptionism: Going deeper into neural networks. Google Research Blog. Retrieved June 20, 2015.
[16] Thorsten Neuschel. A note on extended binomial coefficients. Journal of Integer Sequences, 17(2):3, 2014.
[17] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In NIPS, pages 91–99, 2015.
[18] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014.
[19] Leslie G. Ungerleider and James V. Haxby. "What" and "where" in the human brain. Current Opinion in Neurobiology, 4(2):157–165, 1994.
[20] Kelvin Xu, Jimmy Ba, Ryan Kiros, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. arXiv preprint arXiv:1502.03044, 2015.
[21] Fisher Yu and Vladlen Koltun. Multi-scale context aggregation by dilated convolutions. arXiv preprint arXiv:1511.07122, 2015.
[22] Matthew D. Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In ECCV, pages 818–833. Springer, 2014.
[23] Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Object detectors emerge in deep scene CNNs. arXiv preprint arXiv:1412.6856, 2014.
Provable Efficient Online Matrix Completion via
Non-convex Stochastic Gradient Descent
Chi Jin
UC Berkeley
[email protected]
Sham M. Kakade
University of Washington
[email protected]
Praneeth Netrapalli
Microsoft Research India
[email protected]
Abstract
Matrix completion, where we wish to recover a low rank matrix by observing a
few entries from it, is a widely studied problem in both theory and practice with
wide applications. Most of the provable algorithms so far on this problem have
been restricted to the offline setting where they provide an estimate of the unknown
matrix using all observations simultaneously. However, in many applications, the
online version, where we observe one entry at a time and dynamically update our
estimate, is more appealing. While existing algorithms are efficient for the offline
setting, they could be highly inefficient for the online setting.
In this paper, we propose the first provable, efficient online algorithm for matrix
completion. Our algorithm starts from an initial estimate of the matrix and then
performs non-convex stochastic gradient descent (SGD). After every observation,
it performs a fast update involving only one row of two tall matrices, giving near
linear total runtime. Our algorithm can be naturally used in the offline setting as
well, where it gives competitive sample complexity and runtime to state of the art
algorithms. Our proofs introduce a general framework to show that SGD updates
tend to stay away from saddle surfaces and could be of broader interests to other
non-convex problems.
1 Introduction
Low rank matrix completion refers to the problem of recovering a low rank matrix by observing the
values of only a tiny fraction of its entries. This problem arises in several applications such as video
denoising [13], phase retrieval [3] and most famously in movie recommendation engines [15]. In the
context of recommendation engines for instance, the matrix we wish to recover would be user-item
rating matrix where each row corresponds to a user and each column corresponds to an item. Each
entry of the matrix is the rating given by a user to an item. Low rank assumption on the matrix is
inspired by the intuition that rating of an item by a user depends on only a few hidden factors, which
are much fewer than the number of users or items. The goal is to estimate the ratings of all items by
users given only partial ratings of items by users, which would then be helpful in recommending new
items to users.
The seminal works of Candès and Recht [4] first identified regularity conditions under which low rank matrix completion can be solved in polynomial time using convex relaxation; low rank matrix completion could be ill-posed and NP-hard in general without such regularity assumptions [9].
Since then, a number of works have studied various algorithms under different settings for matrix
completion: weighted and noisy matrix completion, fast convex solvers, fast iterative non-convex
solvers, parallel and distributed algorithms and so on.
Most of this work however deals only with the offline setting where all the observed entries are
revealed at once and the recovery procedure does computation using all these observations simultaneously. However in several applications [5, 18], we encounter the online setting where observations are
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
only revealed sequentially and at each step the recovery algorithm is required to maintain an estimate
of the low rank matrix based on the observations so far. Consider for instance recommendation
engines, where the low rank matrix we are interested in is the user-item rating matrix. While we make
an observation only when a user rates an item, at any point of time, we should have an estimate of the
user-item rating matrix based on all prior observations so as to be able to continuously recommend
items to users. Moreover, this estimate should get better as we observe more ratings.
Algorithms for offline matrix completion can be used to solve the online version by rerunning the
algorithm after every additional observation. However, performing so much computation for every
observation seems wasteful and is also impractical. For instance, using alternating minimization,
which is among the fastest known algorithms for the offline problem, would mean that we take several
passes of the entire data for every additional observation. This is simply not feasible in most settings.
Another natural approach is to group observations into batches and do an update only once for each
batch. This however induces a lag between observations and estimates which is undesirable. To the
best of our knowledge, there is no known provable, efficient, online algorithm for matrix completion.
On the other hand, in order to deal with the online matrix completion scenario in practical applications,
several heuristics (with no convergence guarantees) have been proposed in literature [2, 19]. Most
of these approaches are based on starting with an estimate of the matrix and doing fast updates of
this estimate whenever a new observation is presented. One of the update procedures used in this
context is that of stochastic gradient descent (SGD) applied to the following non-convex optimization
problem

\min_{U,V} \|M - UV^\top\|_F^2 \quad \text{s.t. } U \in \mathbb{R}^{d_1 \times k},\ V \in \mathbb{R}^{d_2 \times k} \quad (1)

where M is the unknown matrix of size d_1 \times d_2, k is the rank of M, and UV^\top is a low rank factorization of M we wish to obtain. The algorithm starts with some U_0 and V_0, and given a new observation (M)_{ij}, SGD updates the i-th row and the j-th row of the current iterates U_t and V_t respectively by

U^{(i)}_{t+1} = U^{(i)}_t - 2\eta d_1 d_2 \left( U_t V_t^\top - M \right)_{ij} V^{(j)}_t, \quad \text{and} \quad V^{(j)}_{t+1} = V^{(j)}_t - 2\eta d_1 d_2 \left( U_t V_t^\top - M \right)_{ij} U^{(i)}_t, \quad (2)

where \eta is an appropriately chosen stepsize, and U^{(i)} denotes the i-th row of matrix U. Note that each update modifies only one row of the factor matrices U and V, and the computation involves only one row of U, V and the newly observed entry (M)_{ij}, and hence is extremely fast. These fast updates make SGD extremely appealing in practice. Moreover, SGD, in the context of matrix completion, is also useful for parallelization and distributed implementation [23].
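For concreteness, one step of update (2) looks as follows in code (a minimal NumPy sketch of ours; the full method additionally needs the initialization and, in the asymmetric case, the renormalization discussed below):

    import numpy as np

    def sgd_step(U, V, i, j, M_ij, eta):
        """One online update (Eq. 2) after observing entry M_ij; touches only two rows."""
        d1, d2 = U.shape[0], V.shape[0]
        residual = U[i] @ V[j] - M_ij
        new_Ui = U[i] - 2 * eta * d1 * d2 * residual * V[j]
        V[j] = V[j] - 2 * eta * d1 * d2 * residual * U[i]   # uses the old row U[i]
        U[i] = new_Ui
        return U, V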
1.1 Our Contributions
In this work we present the first provable efficient algorithm for online matrix completion, by showing that SGD (2) with a good initialization converges to a true factorization of M at a geometric rate. Our main contributions are as follows.

• We provide the first provable, efficient, online algorithm for matrix completion. Starting with a good initialization, after each observation the algorithm makes quick updates, each taking time O(k^3), and requires O(\mu d k \kappa^4 (k + \log(\|M\|_F/\epsilon)) \log d) observations to reach \epsilon accuracy, where \mu is the incoherence parameter, d = \max(d_1, d_2), k is the rank, and \kappa is the condition number of M.
• Moreover, our result features both sample complexity and total runtime linear in d, and is competitive with even the best existing offline results for matrix completion (it either improves over these results or is incomparable, i.e., better in some parameters and worse in others). See Table 1 for the comparison.
• To obtain our results, we introduce a general framework to show that SGD updates tend to stay away from saddle surfaces. In order to do so, we consider distances from saddle surfaces, show that they behave like sub-martingales under SGD updates, and use martingale convergence techniques to conclude that the iterates stay away from saddle surfaces. While [24] shows that SGD updates stay away from saddle surfaces, the stepsizes they can handle are quite small (scaling as 1/\mathrm{poly}(d_1, d_2)), leading to suboptimal computational complexity. Our framework makes it possible to establish the same statement for much larger step sizes, giving us near-optimal runtime. We believe these techniques may be applicable in other non-convex settings as well.
Table 1: Comparison of sample complexity and runtime of our algorithm with existing algorithms in order to obtain Frobenius norm error \epsilon. \tilde{O}(\cdot) hides \log d factors. See Section 1.2 for more discussion.

Algorithm | Sample complexity | Total runtime | Online?
Nuclear Norm [22] | \tilde{O}(\mu d k) | \tilde{O}(d^3 / \sqrt{\epsilon}) | No
Alternating minimization [14] | \tilde{O}(\mu d k \kappa^8 \log(1/\epsilon)) | \tilde{O}(\mu d k^2 \kappa^8 \log(1/\epsilon)) | No
Alternating minimization [8] | \tilde{O}(\mu d k^2 \kappa^2 (k + \log(1/\epsilon))) | \tilde{O}(\mu d k^3 \kappa^2 (k + \log(1/\epsilon))) | No
Projected gradient descent [12] | \tilde{O}(\mu d k^5) | \tilde{O}(\mu d k^7 \log(1/\epsilon)) | No
SGD [24] | \tilde{O}(\mu^2 d k^7 \kappa^6) | \mathrm{poly}(\mu, d, k, \kappa) \log(1/\epsilon) | Yes
Our result | \tilde{O}(\mu d k \kappa^4 (k + \log(1/\epsilon))) | \tilde{O}(\mu d k^4 \kappa^4 \log(1/\epsilon)) | Yes
1.2 Related Work
In this section we will mention some more related work.
Offline matrix completion: There has been a lot of work on designing offline algorithms for matrix
completion, we provide the detailed comparison with our algorithm in Table 1. The nuclear norm
relaxation algorithm [22] has near-optimal sample complexity for this problem but is computationally
expensive. Motivated by the empirical success of non-convex heuristics, a long line of works,
[14, 8, 12, 24] and so on, has obtained convergence guarantees for alternating minimization, gradient
descent, projected gradient descent etc. Even the best of these are suboptimal in sample complexity
by poly(k, ?) factors. Our sample complexity is better than that of [14] and is incomparable to those
of [8, 12]. To the best of our knowledge, the only provable online algorithm for this problem is that
of Sun and Luo [24]. However the stepsizes they suggest are quite small, leading to suboptimal
computational complexity by factors of poly(d_1, d_2). The runtime of our algorithm is linear in d, which improves over theirs by poly(d) factors.
Other models for online matrix completion: Another variant of online matrix completion studied
in the literature is where observations are made on a column by column basis e.g., [16, 26]. These
models can give improved offline performance in terms of space and could potentially work under
relaxed regularity conditions. However, they do not tackle the version where only entries (as opposed
to columns) are observed.
Non-convex optimization: Over the last few years, there has also been a significant amount of
work in designing other efficient algorithms for solving non-convex problems. Examples include
eigenvector computation [6, 11], sparse coding [20, 1], etc. For general non-convex optimization, an interesting line of recent work is that of [7], which proves that gradient descent with noise can also escape saddle points, but only provides a polynomial rate without explicit dependence. Later, [17, 21] show that without noise, the set of points from which gradient descent converges to a saddle point has measure zero. However, they do not provide a rate of convergence. Another related piece of work is [10], which proves global convergence, along with rates of convergence, for the special case of computing the matrix square root.
1.3 Outline
The rest of the paper is organized as follows. In Section 2 we formally describe the problem and all relevant parameters. In Section 3, we present our algorithms, our results, and some of the key intuition behind them. In Section 4 we give a proof outline for our main results. We conclude in Section 5. All formal proofs are deferred to the Appendix.
2 Preliminaries
In this section, we introduce our notation, formally define the matrix completion problem and
regularity assumptions that make the problem tractable.
2.1 Notation
We use [d] to denote \{1, 2, \ldots, d\}. We use bold capital letters A, B to denote matrices and bold lowercase letters u, v to denote vectors. A_{ij} means the (i, j)-th entry of matrix A. \|w\| denotes the \ell_2-norm of vector w, and \|A\| / \|A\|_F / \|A\|_\infty denote the spectral/Frobenius/infinity norm of matrix A. \sigma_i(A) denotes the i-th largest singular value of A and \sigma_{\min}(A) denotes the smallest singular value of A. We also let \kappa(A) = \|A\| / \sigma_{\min}(A) denote the condition number of A (i.e., the ratio of largest to smallest singular value). Finally, for an orthonormal basis W of a subspace, we use P_W = WW^\top to denote the projection onto the subspace spanned by W.
2.2 Problem statement and assumptions
Consider a general rank-k matrix M \in \mathbb{R}^{d_1 \times d_2}. Let \Omega \subseteq [d_1] \times [d_2] be a subset of coordinates, which are sampled uniformly and independently from [d_1] \times [d_2]. We denote by P_\Omega(M) the projection of M onto the set \Omega, so that

[P_\Omega(M)]_{ij} = \begin{cases} M_{ij}, & \text{if } (i, j) \in \Omega \\ 0, & \text{if } (i, j) \notin \Omega \end{cases}

Low rank matrix completion is the task of recovering M by observing only P_\Omega(M). This task is ill-posed and NP-hard in general [9]. In order to make it tractable, we make by-now standard assumptions about the structure of M.
Definition 2.1. Let W \in \mathbb{R}^{d \times k} be an orthonormal basis of a subspace of \mathbb{R}^d of dimension k. The coherence of W is defined to be

\mu(W) \overset{\mathrm{def}}{=} \frac{d}{k} \max_{1 \le i \le d} \|P_W e_i\|^2 = \frac{d}{k} \max_{1 \le i \le d} \|e_i^\top W\|^2

Assumption 2.2 (\mu-incoherence [4, 22]). We assume M is \mu-incoherent, i.e., \max\{\mu(X), \mu(Y)\} \le \mu, where X \in \mathbb{R}^{d_1 \times k} and Y \in \mathbb{R}^{d_2 \times k} are the left and right singular vectors of M.
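Definition 2.1 translates directly into code; a small sketch of ours:

    import numpy as np

    def coherence(W):
        """mu(W) = (d/k) * max_i ||e_i^T W||^2 for an orthonormal basis W of shape (d, k)."""
        d, k = W.shape
        return (d / k) * np.max(np.sum(W**2, axis=1))

For Assumption 2.2, one would apply this to the left and right singular-vector matrices X and Y of M and take the maximum.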
3 Main Results and Intuition
In this section, we present our main results. We first state the result for the special case where M is a symmetric positive semi-definite (PSD) matrix, where the algorithm and analysis are much simpler. We then discuss the general case.
3.1 Symmetric PSD Case
Consider the special case where M is symmetric PSD, and let d \overset{\mathrm{def}}{=} d_1 = d_2. Then we can parametrize a rank-k symmetric PSD matrix by UU^\top, where U \in \mathbb{R}^{d \times k}. Our algorithm for this case is given in Algorithm 1. The algorithm starts by using an initial set of samples \Omega_{\mathrm{init}} to construct a crude approximation to the low rank factorization of M. It then observes samples from M one at a time and updates its factorization after every observation. The following theorem provides guarantees on the performance of Algorithm 1.
Theorem 3.1. Let M \in \mathbb{R}^{d \times d} be a rank-k, symmetric PSD matrix with \mu-incoherence. There exist absolute constants c_0 and c such that if |\Omega_{\mathrm{init}}| \ge c_0 \mu d k^2 \kappa^2(M) \log d and the learning rate \eta \le \frac{c}{\mu d k \kappa^3(M) \|M\| \log d}, then with probability at least 1 - \frac{1}{d^8} we will have, for all t \le d^2 (w.l.o.g. we can always assume t \le d^2, since otherwise we have already observed the entire matrix), that:

\|U_t U_t^\top - M\|_F^2 \le \left(1 - \frac{\eta \sigma_{\min}(M)}{2}\right)^{\!t} \left(\frac{\sigma_{\min}(M)}{10}\right)^{\!2}.
Algorithm 1 Online Algorithm for PSD Matrix Completion
Input: initial set of uniformly random samples \Omega_{\mathrm{init}} of a symmetric PSD matrix M \in \mathbb{R}^{d \times d}, learning rate \eta, number of iterations T
Output: U such that UU^\top \approx M
  U_0 U_0^\top \leftarrow top-k SVD of \frac{d^2}{|\Omega_{\mathrm{init}}|} P_{\Omega_{\mathrm{init}}}(M)
  for t = 0, \ldots, T-1 do
    Observe M_{ij} where (i, j) \sim \mathrm{Unif}([d] \times [d])
    U_{t+1} \leftarrow U_t - 2\eta d^2 (U_t U_t^\top - M)_{ij} (e_i e_j^\top + e_j e_i^\top) U_t
  end for
  Return U_T
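A direct NumPy transcription of Algorithm 1 (our sketch; the two-row assignments below are the row form of the rank-2 update in the loop, and the copies keep the update correct even when i = j):

    import numpy as np

    def online_psd_completion(M, k, eta, T, n_init, rng):
        d = M.shape[0]
        # Warm start: top-k SVD of the rescaled, zero-filled initial sample matrix.
        mask = rng.random((d, d)) < n_init / d**2
        W, s, _ = np.linalg.svd((d**2 / mask.sum()) * (M * mask))
        U = W[:, :k] * np.sqrt(s[:k])
        for _ in range(T):
            i, j = rng.integers(d), rng.integers(d)
            r = 2 * eta * d**2 * (U[i] @ U[j] - M[i, j])
            ui, uj = U[i].copy(), U[j].copy()
            U[i] -= r * uj        # the e_i e_j^T part of the update
            U[j] -= r * ui        # the e_j e_i^T part (also correct when i == j)
        return U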
Remarks:
• The algorithm uses an initial set of observations \Omega_{\mathrm{init}} to produce a warm start iterate U_0, then enters the online stage, where it performs SGD.
• The sample complexity of the warm start phase is O(\mu d k^2 \kappa^2(M) \log d). The initialization consists of a top-k SVD on a sparse matrix, whose runtime is O(\mu d k^3 \kappa^2(M) \log d).
• For the online phase (SGD), if we choose \eta = \frac{c}{\mu d k \kappa^3(M) \|M\| \log d}, the number of observations T required for the error \|U_T U_T^\top - M\|_F to be smaller than \epsilon is O(\mu d k \kappa(M)^4 \log d \cdot \log \frac{\sigma_{\min}(M)}{\epsilon}).
• Since each SGD step modifies two rows of U_t, its runtime is O(k), for a total online-phase runtime of O(kT).
Our proof approach is essentially to show that the objective function is well-behaved (i.e., smooth and strongly convex) in a local neighborhood of the warm start region, and then to use standard techniques to show that SGD obtains geometric convergence in this setting. The most challenging and novel part of our analysis consists of showing that the iterate does not leave this local neighborhood while performing SGD updates. See Section 4 for more details on the proof outline.
3.2 General Case
Let us now consider the general case, where M \in \mathbb{R}^{d_1 \times d_2} can be factorized as UV^\top with U \in \mathbb{R}^{d_1 \times k} and V \in \mathbb{R}^{d_2 \times k}. In this scenario, we denote d = \max\{d_1, d_2\}. Recall from the previous section that our analysis of the performance of SGD depends on the smoothness and strong convexity properties of the objective function in a local neighborhood of the iterates. Having U \ne V introduces additional challenges in this approach, since for any nonsingular k \times k matrix C, setting U' \overset{\mathrm{def}}{=} UC^\top and V' \overset{\mathrm{def}}{=} VC^{-1} gives U'V'^\top = UV^\top. Suppose, for instance, that C is a very small scalar times the identity, i.e., C = \epsilon I for some small \epsilon > 0. In this case U' will be large while V' will be small, which drastically deteriorates the smoothness and strong convexity properties of the objective function in a neighborhood of (U', V').
Algorithm 2 Online Algorithm for Matrix Completion (Theoretical)
Input: initial set of uniformly random samples \Omega_{\mathrm{init}} of M \in \mathbb{R}^{d_1 \times d_2}, learning rate \eta, iterations T
Output: U, V such that UV^\top \approx M
  U_0 V_0^\top \leftarrow top-k SVD of \frac{d_1 d_2}{|\Omega_{\mathrm{init}}|} P_{\Omega_{\mathrm{init}}}(M)
  for t = 0, \ldots, T-1 do
    W_U D W_V^\top \leftarrow \mathrm{SVD}(U_t V_t^\top)
    \hat{U}_t \leftarrow W_U D^{1/2}, \quad \hat{V}_t \leftarrow W_V D^{1/2}
    Observe M_{ij} where (i, j) \sim \mathrm{Unif}([d_1] \times [d_2])
    U_{t+1} \leftarrow \hat{U}_t - 2\eta d_1 d_2 (\hat{U}_t \hat{V}_t^\top - M)_{ij} \, e_i e_j^\top \hat{V}_t
    V_{t+1} \leftarrow \hat{V}_t - 2\eta d_1 d_2 (\hat{U}_t \hat{V}_t^\top - M)_{ij} \, e_j e_i^\top \hat{U}_t
  end for
  Return U_T, V_T
To preclude such a scenario, we would ideally like to renormalize after each step by setting \hat{U}_t \leftarrow W_U D^{1/2} and \hat{V}_t \leftarrow W_V D^{1/2}, where W_U D W_V^\top is the SVD of the matrix U_t V_t^\top. This algorithm is described in Algorithm 2. However, a naive implementation of Algorithm 2, especially the SVD step, would incur O(\min\{d_1, d_2\}) computation per iteration, resulting in a runtime overhead of O(d) over both the online PSD case (i.e., Algorithm 1) and the near linear time offline algorithms (see Table 1).
It turns out that we can take advantage of the fact that in each iteration we only update a single row of U_t and a single row of V_t, and perform efficient (but more complicated) update steps instead of doing an SVD of a d_1 \times d_2 matrix. The resulting algorithm is given in Algorithm 3. The key idea is that, in order to implement the updates, it suffices to compute SVDs of U_t^\top U_t and V_t^\top V_t, which are k \times k matrices, so the runtime of each iteration is at most O(k^3). The following lemma shows the equivalence between Algorithms 2 and 3.
Algorithm 3 Online Algorithm for Matrix Completion (Practical)
Input: initial set of uniformly random samples \Omega_{\mathrm{init}} of M \in \mathbb{R}^{d_1 \times d_2}, learning rate \eta, iterations T
Output: U, V such that UV^\top \approx M
  U_0 V_0^\top \leftarrow top-k SVD of \frac{d_1 d_2}{|\Omega_{\mathrm{init}}|} P_{\Omega_{\mathrm{init}}}(M)
  for t = 0, \ldots, T-1 do
    R_U D_U R_U^\top \leftarrow \mathrm{SVD}(U_t^\top U_t)
    R_V D_V R_V^\top \leftarrow \mathrm{SVD}(V_t^\top V_t)
    Q_U D Q_V^\top \leftarrow \mathrm{SVD}(D_U^{1/2} R_U^\top R_V D_V^{1/2})
    Observe M_{ij} where (i, j) \sim \mathrm{Unif}([d_1] \times [d_2])
    U_{t+1} \leftarrow U_t - 2\eta d_1 d_2 (U_t V_t^\top - M)_{ij} \, e_i e_j^\top V_t R_V D_V^{-1/2} Q_V Q_U^\top D_U^{1/2} R_U^\top
    V_{t+1} \leftarrow V_t - 2\eta d_1 d_2 (U_t V_t^\top - M)_{ij} \, e_j e_i^\top U_t R_U D_U^{-1/2} Q_U Q_V^\top D_V^{1/2} R_V^\top
  end for
  Return U_T, V_T
Lemma 3.2. Algorithm 2 and Algorithm 3 are equivalent in the following sense: given the same observations from M and the same other inputs, the outputs U, V of Algorithm 2 and U', V' of Algorithm 3 satisfy UV^\top = U'V'^\top.
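Lemma 3.2 is easy to verify numerically for a single step; the sketch below (ours) runs one update of each algorithm from the same iterate and checks that the products coincide:

    import numpy as np

    rng = np.random.default_rng(0)
    d1, d2, k = 30, 40, 3
    U = rng.standard_normal((d1, k)); V = rng.standard_normal((d2, k))
    M = rng.standard_normal((d1, k)) @ rng.standard_normal((k, d2))
    i, j = rng.integers(d1), rng.integers(d2)
    c = 2 * 1e-3 * d1 * d2 * (U @ V.T - M)[i, j]

    # Algorithm 2: renormalize via a full SVD of U V^T, then update two rows.
    Wu, D, WvT = np.linalg.svd(U @ V.T, full_matrices=False)
    Uh, Vh = Wu[:, :k] * np.sqrt(D[:k]), WvT[:k].T * np.sqrt(D[:k])
    U2, V2 = Uh.copy(), Vh.copy()
    U2[i] -= c * Vh[j]; V2[j] -= c * Uh[i]

    # Algorithm 3: the same update using only k x k decompositions.
    Du, Ru = np.linalg.eigh(U.T @ U); Dv, Rv = np.linalg.eigh(V.T @ V)
    Qu, _, QvT = np.linalg.svd(np.diag(Du**0.5) @ Ru.T @ Rv @ np.diag(Dv**0.5))
    Qv = QvT.T
    Bv = Rv @ np.diag(Dv**-0.5) @ Qv @ Qu.T @ np.diag(Du**0.5) @ Ru.T
    Bu = Ru @ np.diag(Du**-0.5) @ Qu @ Qv.T @ np.diag(Dv**0.5) @ Rv.T
    U3, V3 = U.copy(), V.copy()
    U3[i] -= c * (V[j] @ Bv); V3[j] -= c * (U[i] @ Bu)

    print(np.allclose(U2 @ V2.T, U3 @ V3.T))  # True: the products coincide (Lemma 3.2)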
Since the output of both algorithms is the same, we can analyze Algorithm 2 (which is easier to analyze) while implementing Algorithm 3 in practice. The following theorem is the main result of our paper, presenting guarantees on the performance of Algorithm 2.
Theorem 3.3. Let M \in \mathbb{R}^{d_1 \times d_2} be a rank-k matrix with \mu-incoherence, and let d \overset{\mathrm{def}}{=} \max(d_1, d_2). There exist absolute constants c_0 and c such that if |\Omega_{\mathrm{init}}| \ge c_0 \mu d k^2 \kappa^2(M) \log d and the learning rate \eta \le \frac{c}{\mu d k \kappa^3(M) \|M\| \log d}, then with probability at least 1 - \frac{1}{d^8} we will have, for all t \le d^2, that:

\|U_t V_t^\top - M\|_F^2 \le \left(1 - \frac{\eta \sigma_{\min}(M)}{2}\right)^{\!t} \left(\frac{\sigma_{\min}(M)}{10}\right)^{\!2}.
Remarks:
• Just as in the case of PSD matrix completion (Theorem 3.1), Algorithm 2 needs an initial set of observations \Omega_{\mathrm{init}} to provide a warm start U_0 and V_0, after which it performs SGD.
• The sample complexity and runtime of the warm start phase are the same as in the symmetric PSD case. The stepsize \eta and the number of observations T needed to achieve \epsilon error in the online phase (SGD) are also the same as in the symmetric PSD case.
• However, the runtime of each update step in the online phase is O(k^3), for a total online-phase runtime of O(k^3 T).
The proof of this theorem again follows a similar line of reasoning to that of Theorem 3.1: we first show that the local neighborhood of the warm start iterate has good smoothness and strong convexity properties, and then use them to show geometric convergence of SGD. Proving that the iterates do not move away from this local neighborhood is, however, significantly more challenging due to the renormalization steps in the algorithm. Please see Appendix C for the full proof.
4 Proof Sketch
In this section we provide the intuition and a proof sketch for our main results. For simplicity, and to highlight the most essential ideas, we mostly focus on the symmetric PSD case (Theorem 3.1). For the asymmetric case, though the high-level ideas remain valid, a lot of additional effort is required to address the renormalization step in Algorithm 2, which makes the proof more involved.
First, note that our algorithm for the PSD case consists of an initialization and then stochastic descent
steps. The following lemma provides guarantees on the error achieved by the initial iterate U0 .
Lemma 4.1. Let M \in \mathbb{R}^{d \times d} be a rank-k PSD matrix with \mu-incoherence. There exists a constant c_0 such that if |\Omega_{\mathrm{init}}| \ge c_0 \mu d k^2 \kappa^2(M) \log d, then with probability at least 1 - \frac{1}{d^{10}}, the top-k SVD of \frac{d^2}{|\Omega_{\mathrm{init}}|} P_{\Omega_{\mathrm{init}}}(M) (denoted U_0 U_0^\top) satisfies:

\|M - U_0 U_0^\top\|_F \le \frac{1}{20} \sigma_{\min}(M) \quad \text{and} \quad \max_j \|e_j^\top U_0\|^2 \le \frac{10 \mu k \kappa(M)}{d} \|M\| \quad (3)
By Lemma 4.1, we know the initialization algorithm already gives a U_0 in the local region defined by Eq. (3). Intuitively, the stochastic descent steps should then keep doing local search within this region. To establish linear convergence of \|U_t U_t^\top - M\|_F^2 and obtain the final result, we first establish several important lemmas describing the properties of this local region. Throughout this section, we denote \mathrm{SVD}(M) = XSX^\top, where X \in \mathbb{R}^{d \times k} and S \in \mathbb{R}^{k \times k} is diagonal. We postpone all formal proofs to the Appendix.
Lemma 4.2. For the function f(U) = \|M - UU^\top\|_F^2 and any U_1, U_2 \in \{U \mid \|U\| \le \Gamma\}, we have:

\|\nabla f(U_1) - \nabla f(U_2)\|_F \le 16 \max\{\Gamma^2, \|M\|\} \cdot \|U_1 - U_2\|_F

Lemma 4.3. For the function f(U) = \|M - UU^\top\|_F^2 and any U \in \{U \mid \sigma_{\min}(X^\top U) \ge \gamma\}, we have:

\|\nabla f(U)\|_F^2 \ge 4\gamma^2 f(U)

Lemma 4.2 says that f is smooth if the spectral norm of U is not too large. On the other hand, \sigma_{\min}(X^\top U) being not too small requires both that \sigma_{\min}(U^\top U) is not too small and that \sigma_{\min}(X^\top W) is not too small, where W is the top-k eigenspace of UU^\top. That is, Lemma 4.3 says that f has a property similar to strong convexity in the standard optimization literature, provided U is rank k in a robust sense (\sigma_k(U) is not too small) and the angle between the top-k eigenspace of UU^\top and the top-k eigenspace of M is not large.

Lemma 4.4. Within the region \mathcal{D} = \{U \mid \|M - UU^\top\|_F \le \frac{1}{10} \sigma_k(M)\}, we have:

\|U\| \le \sqrt{2\|M\|}, \qquad \sigma_{\min}(X^\top U) \ge \sqrt{\sigma_k(M)/2}

Lemma 4.4 says that inside the region \{U \mid \|M - UU^\top\|_F \le \frac{1}{10} \sigma_k(M)\}, the matrix U always has good spectral properties, which provides the preconditions for both Lemma 4.2 and Lemma 4.3; within this region, f(U) is both smooth and has a property very similar to strong convexity.
With the above three lemmas, we are already able to see the intuition behind the linear convergence in Theorem 3.1. Denote the stochastic gradient

\mathrm{SG}(U) = 2d^2 (UU^\top - M)_{ij} (e_i e_j^\top + e_j e_i^\top) U \quad (4)

where \mathrm{SG}(U) is a random matrix depending on the randomness of the sample (i, j) of matrix M. Then the stochastic update step in Algorithm 1 can be rewritten as:

U_{t+1} \leftarrow U_t - \eta \, \mathrm{SG}(U_t)

Let f(U) = \|M - UU^\top\|_F^2. By an easy calculation, we know \mathbb{E}\,\mathrm{SG}(U) = \nabla f(U), that is, \mathrm{SG}(U) is unbiased. Combining Lemma 4.4 with Lemmas 4.2 and 4.3, we know that within the region \mathcal{D} specified by Lemma 4.4, the function f(U) is 32\|M\|-smooth and \|\nabla f(U)\|_F^2 \ge 2\sigma_{\min}(M) f(U). Suppose, ideally, that U_0, \ldots, U_t always stay inside the region \mathcal{D}; this directly gives:

\mathbb{E} f(U_{t+1}) \le \mathbb{E} f(U_t) - \eta \, \mathbb{E} \langle \nabla f(U_t), \mathrm{SG}(U_t) \rangle + 16\eta^2 \|M\| \cdot \mathbb{E} \|\mathrm{SG}(U_t)\|_F^2
= \mathbb{E} f(U_t) - \eta \, \mathbb{E} \|\nabla f(U_t)\|_F^2 + 16\eta^2 \|M\| \cdot \mathbb{E} \|\mathrm{SG}(U_t)\|_F^2
\le (1 - 2\eta \sigma_{\min}(M)) \, \mathbb{E} f(U_t) + 16\eta^2 \|M\| \cdot \mathbb{E} \|\mathrm{SG}(U_t)\|_F^2
One interesting aspect of our main result is that we actually show linear convergence in the presence of noise in the gradient. This is possible because, for the second-order (\eta^2) term above, we can roughly see from Eq. (4) that \|\mathrm{SG}(U)\|_F^2 \le h(U) \cdot f(U), where h(U) is a factor that depends on U and is always bounded. That is, \mathrm{SG}(U) enjoys a self-bounding property: \|\mathrm{SG}(U)\|_F^2 goes to zero as the objective function f(U) goes to zero. Therefore, by choosing the learning rate \eta appropriately small, the first-order term always dominates the second-order term, which establishes linear convergence.

Now, the only remaining issue is to prove that U_0, \ldots, U_t always stay inside the local region \mathcal{D}. In reality, we can only prove this statement with high probability, due to the stochastic nature of the updates. This is also the most challenging part of our proof, which makes our analysis different from standard convex analysis and is uniquely required by the non-convex setting.
Our key theorem is presented as follows:

Theorem 4.5. Let f(U) = \|UU^\top - M\|_F^2 and g_i(U) = \|e_i^\top U\|^2. Suppose the initial U_0 satisfies:

f(U_0) \le \left(\frac{\sigma_{\min}(M)}{20}\right)^{\!2}, \qquad \max_i g_i(U_0) \le \frac{10 \mu k \kappa(M)^2}{d} \|M\|

Then there exists an absolute constant c such that for any learning rate \eta < \frac{c}{\mu d k \kappa^3(M) \|M\| \log d}, with probability at least 1 - \frac{T}{d^{10}}, we will have for all t \le T that:

f(U_t) \le \left(1 - \frac{\eta \sigma_{\min}(M)}{2}\right)^{\!t} \left(\frac{\sigma_{\min}(M)}{10}\right)^{\!2}, \qquad \max_i g_i(U_t) \le \frac{20 \mu k \kappa(M)^2}{d} \|M\| \quad (5)

Note that the function \max_i g_i(U) indicates the incoherence of the matrix U. Theorem 4.5 guarantees that if the initial U_0 lies in a local region that is incoherent and where U_0 U_0^\top is close to M, then with high probability, for all steps t \le T, U_t will always stay in a slightly relaxed local region, and f(U_t) converges linearly.

It is not hard to show that all saddle points of f(U) satisfy \sigma_k(U) = 0, and that all local minima are global minima. Since U_0, \ldots, U_t automatically stay in the region f(U) \le (\frac{\sigma_{\min}(M)}{10})^2 with high probability, we know U_t also stays away from all saddle points. The claim that U_0, \ldots, U_t stay incoherent is essential for controlling the variance and the probability-1 bound of \mathrm{SG}(U_t), so that we can use a large step size and obtain a tight convergence rate.

The major challenge in proving Theorem 4.5 is to prove that U_t stays in the local region while, at the same time, achieving good sample complexity and running time (linear in d). This also requires the learning rate \eta in Algorithm 1 to be relatively large. Let the event \mathcal{E}_t denote the good event where U_0, \ldots, U_t satisfy Eq. (5). Theorem 4.5 claims that P(\mathcal{E}_T) is large. The essential step in the proof is constructing two supermartingales related to f(U_t) 1_{\mathcal{E}_t} and g_i(U_t) 1_{\mathcal{E}_t} (where 1_{(\cdot)} denotes the indicator function), and using Bernstein's inequality to show concentration of the supermartingales. The 1_{\mathcal{E}_t} term allows us to assume that all previous iterates U_0, \ldots, U_t have all the desired properties inside the local region.

Finally, we see Theorem 3.1 as an immediate corollary of Theorem 4.5.
5 Conclusion
In this paper, we presented the first provable, efficient online algorithm for matrix completion, based on nonconvex SGD. In addition to the online setting, our results are also competitive with state-of-the-art results in the offline setting. We obtain our results by introducing a general framework that shows how SGD updates self-regulate to stay away from saddle points. We hope our paper and results help generate interest in online matrix completion, and that our techniques and framework prompt tighter analyses of other nonconvex problems.
Swapout: Learning an ensemble of deep architectures
Saurabh Singh, Derek Hoiem, David Forsyth
Department of Computer Science
University of Illinois, Urbana-Champaign
{ss1, dhoiem, daf}@illinois.edu
Abstract
We describe Swapout, a new stochastic training method that outperforms ResNets of identical network structure, yielding impressive results on CIFAR-10 and CIFAR-100. Swapout samples from a rich set of architectures including dropout [20],
stochastic depth [7] and residual architectures [5, 6] as special cases. When viewed
as a regularization method, swapout not only inhibits co-adaptation of units in
a layer, similar to dropout, but also across network layers. We conjecture that
swapout achieves strong regularization by implicitly tying the parameters across
layers. When viewed as an ensemble training method, it samples a much richer
set of architectures than existing methods such as dropout or stochastic depth.
We propose a parameterization that reveals connections to existing architectures
and suggests a much richer set of architectures to be explored. We show that our
formulation suggests an efficient training method and validate our conclusions on
CIFAR-10 and CIFAR-100, matching state-of-the-art accuracy. Remarkably, our 32-layer wider model performs similarly to a 1001-layer ResNet model.
1 Introduction
This paper describes swapout, a stochastic training method for general deep networks. Swapout
is a generalization of dropout [20] and stochastic depth [7] methods. Dropout zeros the output of
individual units at random during training, while stochastic depth skips entire layers at random during
training. In comparison, the most general swapout network produces the value of each output unit
independently by reporting the sum of a randomly selected subset of current and all previous layer
outputs for that unit. As a result, while some units in a layer may act like normal feedforward units,
others may produce skip connections and yet others may produce a sum of several earlier outputs. In
effect, our method averages over a very large set of architectures that includes all architectures used
by dropout and all used by stochastic depth.
Our experimental work focuses on a version of swapout which is a natural generalization of the
residual network [5, 6]. We show that this results in improvements in accuracy over residual networks
with the same number of layers.
Improvements in accuracy are often sought by increasing the depth, leading to serious practical
difficulties. The number of parameters rises sharply, although recent works such as [19, 22] have addressed this by reducing the filter size. Another issue resulting from increased depth is
the difficulty of training longer chains of dependent variables. Such difficulties have been addressed
by architectural innovations that introduce shorter paths from input to loss either directly [22, 21, 5]
or with additional losses applied to intermediate layers [22, 12]. At the time of writing, the deepest
networks that have been successfully trained are residual networks (1001 layers [6]). We show that
increasing the depth of our swapout networks increases their accuracy.
There is compelling experimental evidence that these very large depths are helpful, though this may
be because architectural innovations introduced to make networks trainable reduce the capacity of
the layers. The theoretical evidence that a depth of 1000 is required for practical problems is thin.
Bengio and Delalleau argue that circuit efficiency constraints suggest increasing depth is important,
because there are functions that require exponentially large shallow networks to compute [1]. Less
experimental interest has been displayed in the width of the networks (the number of filters in a
convolutional layer). We show that increasing the width of our swapout networks leads to significant
improvements in their accuracy; an appropriately wide swapout network is competitive with a deep
residual network that is 1.5 orders of magnitude deeper and has more parameters.
Contributions: Swapout is a novel stochastic training scheme that can sample from a rich set of
architectures including dropout, stochastic depth and residual architectures as special cases. Swapout
improves the performance of the residual networks for a model of the same depth. Wider but much
shallower swapout networks are competitive with very deep residual networks.
2 Related Work
Convolutional neural networks have a long history (see the introduction of [11]). They are now
intensively studied as a result of recent successes (e.g. [9]). Increasing the number of layers in
a network improves performance [19, 22] if the network can be trained. A variety of significant
architectural innovations improve trainability, including: the ReLU [14, 3]; batch normalization [8];
and allowing signals to skip layers.
Our method exploits this skipping process. Highway networks use gated skip connections to allow
information and gradients to pass unimpeded across several layers [21]. Residual networks use
identity skip connections to further improve training [5]; extremely deep residual networks can be
trained, and perform well [6]. In contrast to these architectures, our method skips at the unit level
(below), and does so randomly.
Our method employs randomness at training time. For a review of the history of random methods,
see the introduction of [16], which shows that entirely randomly chosen features can produce an
SVM that generalizes well. Randomly dropping out unit values (dropout [20]) discourages co-adaptation between units. Randomly skipping layers (stochastic depth) [7] during training reliably
leads to improvements at test time, likely because doing so regularizes the network. The precise
details of the regularization remain uncertain, but it appears that stochastic depth represents a form
of tying between layers; when a layer is dropped, other layers are encouraged to be able to replace
it. Each method can be seen as training a network that averages over a family of architectures
during inference. Dropout averages over architectures with ?missing? units and stochastic depth
averages over architectures with ?missing? layers. Other successful recent randomized methods
include dropconnect [23] which generalizes dropout by dropping individual connections instead of
units (so dropping several connections together), and stochastic pooling [24] (which regularizes by
replacing the deterministic pooling by randomized pooling). In contrast, our method skips layers
randomly at a unit level enjoying the benefits of each method.
Recent results show that (a) stochastic gradient descent with sufficiently few steps is stable (in the
sense that changes to training data do not unreasonably disrupt predictions) and (b) dropout enhances
that property, by reducing the value of a Lipschitz constant ([4], Lemma 4.4). We show our method
enjoys the same behavior as dropout in this framework.
Like dropout, the network trained with swapout depends on random variables. A reasonable strategy
at test time with such a network is to evaluate multiple instances (with different samples used for
the random variables) and average. Reliable improvements in accuracy are achievable by training
distinct models (which have distinct sets of parameters), then averaging predictions [22], thereby
forming an explicit ensemble. In contrast, each of the instances of our network in an average would
draw from the same set of parameters (we call this an implicit ensemble). Srivastava et al. argue
that, at test time, random values in a dropout network should be replaced with expectations, rather
than taking an average over multiple instances [20] (though they use explicit ensembles, increasing
the computational cost). Considerations include runtime at test; the number of samples required;
variance; and experimental accuracy results. For our model, accurate values of these expectations are
not available. In Section 4, we show that (a) swapout networks that use estimates of these expectations
outperform strong comparable baselines and (b) in turn, these are outperformed by swapout networks
that use an implicit ensemble.
2
[Figure 1 diagram: five blocks labeled (a) Input X, (b) FeedForward with Y = F(X), (c) ResNet with Y = X + F(X), (d) SkipForward with Y = Θ ⊙ X + (1 − Θ) ⊙ F(X), and (e) Swapout with Y = Θ1 ⊙ X + Θ2 ⊙ F(X).]
Figure 1: Visualization of architectural differences, showing computations for a block using various
architectures. Each circle is a unit in a grid corresponding to spatial layout, and circles are colored to
indicate what they report. Given input X (a), all units in a feed forward block emit F (X) (b). All
units in a residual network block emit X + F (X) (c). A skipforward network randomly chooses
between reporting X and F (X) per unit (d). Finally, swapout randomly chooses between reporting
0 (and so dropping out the unit), X (skipping the unit), F (X) (imitating a feedforward network at
the unit) and X + F (X) (imitating a residual network unit).
3 Swapout
Notation and terminology: We use capital letters to represent tensors and $\odot$ to represent elementwise product (broadcasted for scalars). We use boldface 0 and 1 to represent tensors of 0s and 1s respectively. A network block is a set of simple layers in some specific configuration, e.g., a
convolution followed by a ReLU or a residual network block [5]. Several such potentially different
blocks can be connected in the form of a directed acyclic graph to form the full network model.
Dropout kills individual units randomly; stochastic depth skips entire blocks of units randomly.
Swapout allows individual units to be dropped, or to skip blocks randomly. Implementing swapout is
a straightforward generalization of dropout. Let $X$ be the input to some network block that computes $F(X)$. The $u$-th unit produces $F^{(u)}(X)$ as output. Let $\Theta$ be a tensor of i.i.d. Bernoulli random variables. Dropout computes the output $Y$ of that block as
$$Y = \Theta \odot F(X). \qquad (1)$$
It is natural to think of dropout as randomly selecting an output from the set $\mathcal{F}^{(u)} = \{0, F^{(u)}(X)\}$ for the $u$-th unit.
Swapout generalizes dropout by expanding the choice of $\mathcal{F}^{(u)}$. Now write $\{\Theta_i\}$ for $N$ distinct tensors of i.i.d. Bernoulli random variables indexed by $i$ and with corresponding parameters $\{\theta_i\}$. Let $\{F_i\}$ be corresponding tensors consisting of values already computed somewhere in the network. Note that one of these $F_i$ can be $X$ itself (identity). However, $F_i$ are not restricted to being a function of $X$ and we drop the $X$ to indicate this. Most natural choices for $F_i$ are the outputs of earlier layers. Swapout computes the output of the layer in question by computing
$$Y = \sum_{i=1}^{N} \Theta_i \odot F_i \qquad (2)$$
and so, for unit $u$, we have $\mathcal{F}^{(u)} = \{F_1^{(u)}, F_2^{(u)}, \ldots, F_1^{(u)} + F_2^{(u)}, \ldots, \sum_i F_i^{(u)}\}$. We study the simplest case where
$$Y = \Theta_1 \odot X + \Theta_2 \odot F(X) \qquad (3)$$
so that, for unit $u$, we have $\mathcal{F}^{(u)} = \{0, X^{(u)}, F^{(u)}(X), X^{(u)} + F^{(u)}(X)\}$. Thus, each unit in the
layer could be:
1) dropped (choose 0);
2) a feedforward unit (choose $F^{(u)}(X)$);
3) skipped (choose $X^{(u)}$);
4) or a residual network unit (choose $X^{(u)} + F^{(u)}(X)$).
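As a concrete illustration, here is a minimal NumPy sketch of Eq. (3); the shapes and the stand-in for F are hypothetical, and this is our illustration rather than the authors' implementation. Sampling the two masks independently per unit realizes all four cases above.

```python
import numpy as np

rng = np.random.default_rng(0)

def swapout_block(x, f_x, theta1=0.5, theta2=0.5, train=True):
    """Sample Y = Theta1 * X + Theta2 * F(X) elementwise, as in Eq. (3)."""
    if not train:
        # deterministic inference: replace the masks by their expectations
        return theta1 * x + theta2 * f_x
    m1 = rng.binomial(1, theta1, size=x.shape)   # Theta1, i.i.d. Bernoulli(theta1)
    m2 = rng.binomial(1, theta2, size=x.shape)   # Theta2, i.i.d. Bernoulli(theta2)
    return m1 * x + m2 * f_x

x = rng.standard_normal((2, 8))                          # hypothetical block input
f_x = np.maximum(x @ rng.standard_normal((8, 8)), 0.0)   # stand-in for F(X)
y = swapout_block(x, f_x)
# Per unit, (m1, m2) = (0,0) drops it, (1,0) skips it, (0,1) acts feedforward,
# and (1,1) acts like a residual unit.
```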
Since a swapout network can clearly imitate a residual network, and since residual networks are
currently the best-performing networks on various standard benchmarks, we perform exhaustive
experimental comparisons with them.
If one accepts the view of dropout and stochastic depth as averaging over a set of architectures, then
swapout extends the set of architectures used. Appropriate random choices of $\Theta_1$ and $\Theta_2$ yield: all
architectures covered by dropout; all architectures covered by stochastic depth; and block level skip
connections. But other choices yield unit level skip and residual connections.
Swapout retains important properties of dropout. Swapout discourages co-adaptation by dropping
units, but also by on occasion presenting units with inputs that have come from earlier layers. Dropout
has been shown to enhance the stability of stochastic gradient descent ([4], lemma 4.4). This applies
to swapout in its most general form, too. We extend the notation of that paper, and write $L$ for a Lipschitz constant that applies to the network, $\nabla f(v)$ for the gradient of the network $f$ with parameters $v$, and $D\nabla f(v)$ for the gradient of the dropped-out version of the network.
The crucial point in the relevant enabling lemma is that $\mathbb{E}[\|D\nabla f(v)\|] < \mathbb{E}[\|\nabla f(v)\|] \le L$ (the inequality implies improvements). Now write $\nabla_S[f](v)$ for the gradient of a swapout network, and $\nabla_G[f](v)$ for the gradient of the swapout network which achieves the largest Lipschitz constant by choice of $\Theta_i$ (this exists, because $\Theta_i$ is discrete). First, a Lipschitz constant applies to this network; second, $\mathbb{E}[\|\nabla_S[f](v)\|] \le \mathbb{E}[\|\nabla_G[f](v)\|] \le L$, so swapout makes stability no worse; third, we speculate that light conditions on $f$ should provide $\mathbb{E}[\|\nabla_S[f](v)\|] < \mathbb{E}[\|\nabla_G[f](v)\|] \le L$, improving stability ([4], Section 4).
3.1 Inference in Stochastic Networks
A model trained with swapout represents an entire family of networks with tied parameters, where
members of the family were sampled randomly during training. There are two options for inference.
Either replace random variables with their expected values, as recommended by Srivastava et al. [20]
(deterministic inference). Alternatively, sample several members of the family at random, and average
their predictions (stochastic inference). Note that such stochastic inference with dropout has been
studied in [2].
There is an important difference between swapout and dropout. In a dropout network, one can
estimate expectations exactly (as long as the network isn't trained with batch normalization, below). This is because $\mathbb{E}[\mathrm{ReLU}[\Theta \odot F(X)]] = \mathrm{ReLU}[\mathbb{E}[\Theta \odot F(X)]]$ (recall $\Theta$ is a tensor of Bernoulli random variables, and thus non-negative).
In a swapout network, one usually cannot estimate expectations exactly. The problem is that $\mathbb{E}[\mathrm{ReLU}[\Theta_1 \odot X + \Theta_2 \odot Y]]$ is not the same as $\mathrm{ReLU}[\mathbb{E}[\Theta_1 \odot X + \Theta_2 \odot Y]]$ in general. Estimates of
expectations that ignore this are successful, as the experiments show, but stochastic inference gives
significantly better results.
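A quick Monte Carlo check of this non-commutation on toy scalar values (all numbers here are assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = 1.0, -2.0                      # toy unit values (assumed)
t1 = rng.binomial(1, 0.5, 100_000)
t2 = rng.binomial(1, 0.5, 100_000)

stochastic = np.maximum(t1 * x + t2 * y, 0.0).mean()   # E[ReLU(Th1*x + Th2*y)] ~ 0.25
deterministic = max(0.5 * x + 0.5 * y, 0.0)            # ReLU(E[Th1*x + Th2*y]) = 0.0
print(stochastic, deterministic)                       # the two clearly differ
```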
Srivastava et al. argue that deterministic inference is significantly less expensive in computation.
We believe that Srivastava et al. may have overestimated how many samples are required for an
accurate average, because they use distinct dropout networks in the average (Figure 11 in [20]).
Our experience of stochastic inference with swapout has been positive, with the number of samples
needed for good behavior small (Figure 2). Furthermore, computational costs of inference are smaller
when each instance of the network uses the same parameters.
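Stochastic inference then amounts to averaging a handful of sampled forward passes with shared parameters; a minimal sketch follows (the toy network is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_inference(forward_fn, x, n_samples=30):
    """Average predictions over n_samples stochastic forward passes.

    forward_fn is assumed to resample all swapout masks internally on every
    call; the network parameters themselves are shared across the passes."""
    return np.mean([forward_fn(x) for _ in range(n_samples)], axis=0)

# Toy stand-in for a stochastic network: a fresh Bernoulli mask per call.
toy_net = lambda x: rng.binomial(1, 0.5, x.shape) * x
print(stochastic_inference(toy_net, np.ones(4)))   # approaches 0.5 * x
```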
A technically more delicate point is that both dropout and swapout networks interact poorly with batch
normalization if one uses deterministic inference. The problem is that the estimates collected by batch
normalization during training may not reflect test time statistics. To see this consider two random
variables $X$ and $Y$ and let $\theta_1, \theta_2 \sim \mathrm{Bernoulli}(\theta)$. While $\mathbb{E}[\theta_1 X + \theta_2 Y] = \mathbb{E}[\theta X + \theta Y]$, it can be shown that $\mathrm{Var}[\theta_1 X + \theta_2 Y] \ge \mathrm{Var}[\theta X + \theta Y]$, with equality holding only for $\theta = 0$ and $\theta = 1$. Thus, the variance estimates collected by Batch Normalization during training do not represent the statistics observed during testing if the expected values of $\theta_1$ and $\theta_2$ are used in a
deterministic inference scheme. These errors in scale estimation accumulate as more and more layers
are stacked. This may explain why [7] reports that dropout doesn't lead to any improvement when
used in residual networks with batch normalization.
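The variance gap is easy to verify numerically (toy Gaussian activations and an assumed θ = 0.5):

```python
import numpy as np

rng = np.random.default_rng(0)
n, theta = 1_000_000, 0.5
x = rng.standard_normal(n)            # toy activations X
y = rng.standard_normal(n) + 1.0      # toy activations Y
t1 = rng.binomial(1, theta, n)
t2 = rng.binomial(1, theta, n)

var_train = np.var(t1 * x + t2 * y)          # what batch norm sees in training
var_test = np.var(theta * x + theta * y)     # what deterministic inference produces
print(var_train, var_test)                   # the training-time variance is larger
```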
3.2 Baseline comparison methods
ResNets: We compare with ResNet architectures as described in [5] (referred to as v1) and in [6] (referred to as v2).
Dropout: Standard dropout on the output of the residual block, using $Y = \Theta \odot (X + F(X))$.
Layer Dropout: We replace equation 3 by $Y = X + \Theta^{(1\times1)} \odot F(X)$. Here $\Theta^{(1\times1)}$ is a single Bernoulli random variable shared across all units.
SkipForward: Equation 3 introduces two stochastic parameters $\Theta_1$ and $\Theta_2$. We also explore a simpler architecture, SkipForward, that introduces only one parameter but samples from the smaller set $\mathcal{F}^{(u)} = \{X^{(u)}, F^{(u)}(X)\}$ as below. A parallel work refers to this as zoneout [10].
$$Y = \Theta \odot X + (1 - \Theta) \odot F(X) \qquad (4)$$
4 Experiments
We experiment extensively on the CIFAR-10 dataset and demonstrate that a model trained with
swapout outperforms a comparable ResNet model. Further, a 32-layer wider model matches the performance of a 1001-layer ResNet on both CIFAR-10 and CIFAR-100 datasets.
Model: We experiment with ResNet architectures as described in [5] (referred to as v1) and in [6] (referred to as v2). However, our implementation (referred to as ResNet Ours) has the following
modifications which improve the performance of the original model (Table 1). Between blocks of
different feature sizes we subsample using average pooling instead of strided convolutions and use
projection shortcuts with learned parameters. For final prediction we follow a scheme similar to Network in Network [13]. We replace average pooling and the fully connected layer by a 1 × 1 convolution
layer followed by global average pooling to predict the logits that are fed into the softmax.
Layers in ResNets are arranged in three groups with all convolutional layers in a group containing
equal number of filters. We represent the number of filters in each group as a tuple with the smallest
size as (16, 32, 64) (as used in [5] for CIFAR-10). We refer to this as width and experiment with various multiples of this base size, represented as W × 1, W × 2, etc.
Training: We train using SGD with a batch size of 128, momentum of 0.9 and weight decay of
0.0001. Unless otherwise specified, we train all the models for a total 256 epochs. Starting from an
initial learning rate of 0.1, we drop it by a factor of 10 after 192 epochs and then again after 224
epochs. Standard augmentation of left-right flips and random translations of up to four pixels is used.
For translation, we pad the images by 4 pixels on all the sides and sample a random 32 ? 32 crop.
All the images in a mini-batch use the same crop. Note that dropout slows convergence ([20], A.4),
and swapout should do so too for similar reasons. Thus using the same training schedule for all the
methods should disadvantage swapout.
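For concreteness, a sketch of the learning-rate schedule just described (plain Python; the epoch boundaries come from the text, the function itself is our own illustration):

```python
def learning_rate(epoch, base_lr=0.1):
    """Piecewise-constant schedule: drop by 10x after epochs 192 and 224."""
    if epoch < 192:
        return base_lr
    if epoch < 224:
        return base_lr / 10.0
    return base_lr / 100.0

# [learning_rate(e) for e in (0, 191, 192, 223, 224, 255)]
# -> [0.1, 0.1, 0.01, 0.01, 0.001, 0.001]
```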
Models trained with Swapout consistently outperform baselines: Table 1 compares Swapout
with various 20 layer baselines. Models trained with Swapout consistently outperform all other
models of similar architecture.
The stochastic training schedule matters: Different layers in a swapout network could be trained
with different parameters of their Bernoulli distributions (the stochastic training schedule). Table 2
shows that stochastic training schedules have a significant effect on the performance. We report the
performance with deterministic as well as stochastic inference. These schedules differ in how the
values of the parameters $\theta_1$ and $\theta_2$ of the random variables in equation 3 are set for different layers. Note that $\theta_1 = \theta_2 = 0.5$ corresponds to the maximum stochasticity. A schedule with less randomness in
the early layers (bottom row) performs the best because swapout adds per unit noise and early layers
have the largest number of units. Thus, low stochasticity in early layers significantly reduces the
randomness in the system. We use this schedule for all the experiments unless otherwise stated.
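For illustration, a hypothetical helper computing a Linear(a, b) schedule over the blocks (the interpolation endpoints come from the text; the exact per-block assignment in the paper may differ):

```python
def linear_schedule(a, b, num_blocks):
    """Per-block theta values interpolated from a (first block) to b (last block)."""
    if num_blocks == 1:
        return [a]
    return [a + (b - a) * i / (num_blocks - 1) for i in range(num_blocks)]

thetas = linear_schedule(1.0, 0.5, 9)   # Linear(1, 0.5) over an assumed 9 blocks
# Early blocks keep theta near 1 (little noise); later blocks approach 0.5.
```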
Table 1: In comparison with fair baselines on CIFAR-10, swapout is always more accurate. We refer to the base width of (16, 32, 64) as W × 1 and others are multiples of it (see Table 3 for details on width). We report the width along with the number of parameters in each model. Models trained with swapout consistently outperform all other models of comparable architecture. All stochastic methods were trained using the Linear(1, 0.5) schedule (Table 2) and use stochastic inference. v1 and v2 represent the residual block architectures in [5] and [6] respectively.

Method                     | Width | #Params | Error(%)
ResNet v1 [5]              | W × 1 | 0.27M   | 8.75
ResNet v1 Ours             | W × 1 | 0.27M   | 8.54
Swapout v1                 | W × 1 | 0.27M   | 8.27
ResNet v2 Ours             | W × 1 | 0.27M   | 8.27
Swapout v2                 | W × 1 | 0.27M   | 7.97
Swapout v1                 | W × 2 | 1.09M   | 6.58
ResNet v2 Ours             | W × 2 | 1.09M   | 6.54
Stochastic Depth v2 Ours   | W × 2 | 1.09M   | 5.99
Dropout v2                 | W × 2 | 1.09M   | 5.87
SkipForward v2             | W × 2 | 1.09M   | 6.11
Swapout v2                 | W × 2 | 1.09M   | 5.68
Table 2: The choice of stochastic training schedule matters. We evaluate the performance of a 20-layer swapout model (W × 2) trained with different stochasticity schedules on CIFAR-10. These schedules differ in how the parameters θ1 and θ2 of the Bernoulli random variables in equation 3 are set for the different layers. Linear(a, b) refers to linear interpolation from a to b from the first block to the last (see [7]). Others use the same value for all the blocks. We report the performance for both the deterministic and stochastic inference (with 30 samples). The schedule with less randomness in the early layers (bottom row) performs the best.

Method                              | Deterministic Error(%) | Stochastic Error(%)
Swapout (θ1 = θ2 = 0.5)             | 10.36                  | 6.69
Swapout (θ1 = 0.2, θ2 = 0.8)        | 10.14                  | 7.63
Swapout (θ1 = 0.8, θ2 = 0.2)        | 7.58                   | 6.56
Swapout (θ1 = θ2 = Linear(0.5, 1))  | 7.34                   | 6.52
Swapout (θ1 = θ2 = Linear(1, 0.5))  | 6.43                   | 5.68
Swapout improves over ResNet architecture: From Table 3 it is evident that networks trained
with Swapout consistently show better performance than corresponding ResNets, for most choices
of width investigated, using just the deterministic inference. This difference indicates that the
performance improvement is not just an ensemble effect.
Stochastic inference outperforms deterministic inference: Table 3 shows that the stochastic
inference scheme outperforms the deterministic scheme in all the experiments. Prediction for each
image is done by averaging the results of 30 stochastic forward passes. This difference is not just
due to the widely reported effect that an ensemble of networks is better as networks in our ensemble
share parameters. Instead, stochastic inference produces more accurate expectations and interacts
better with batch normalization.
Stochastic inference needs few samples for a good estimate: Figure 2 shows the estimated
accuracies as a function of the number of forward passes per image. It is evident that relatively few
samples are enough for a good estimate of the mean. Compare Figure 11 of [20], which implies ≈ 50
samples are required.
Increase in width leads to considerable performance improvements: The number of filters in
a convolutional layer is its width. Table 3 shows that the performance of a 20 layer model improves
considerably as the width is increased both for the baseline ResNet v2 architecture as well as
the models trained with Swapout. Swapout is better able to use the available capacity than the
Table 3: Wider swapout models work better. We evaluate the effect of increasing the number of filters
on CIFAR-10. ResNets [5] contain three groups of layers with all convolutional layers in a group
containing equal number of filters. We indicate the number of filters in each group as a tuple below
and report the performance with deterministic as well as stochastic inference with 30 samples. For
each size, model trained with Swapout outperforms the corresponding ResNet model.
Model                 | Width          | #Params | ResNet v2 | Swapout Deterministic | Swapout Stochastic
Swapout v2 (20) W × 1 | (16, 32, 64)   | 0.27M   | 8.27      | 8.58                  | 7.92
Swapout v2 (20) W × 2 | (32, 64, 128)  | 1.09M   | 6.54      | 6.40                  | 5.68
Swapout v2 (20) W × 4 | (64, 128, 256) | 4.33M   | 5.62      | 5.43                  | 5.09
Swapout v2 (32) W × 4 | (64, 128, 256) | 7.43M   | 5.23      | 4.97                  | 4.76
Table 4: Swapout outperforms comparable methods on CIFAR-10. A 32 layer wider model performs
competitively against a 1001 layer ResNet. Swapout and dropout use stochastic inference.
Method                          | #Params | Error(%)
DropConnect [23]                | -       | 9.32
NIN [13]                        | -       | 8.81
FitNet(19) [17]                 | -       | 8.39
DSN [12]                        | -       | 7.97
Highway [21]                    | -       | 7.60
ResNet v1 (110) [5]             | 1.7M    | 6.41
Stochastic Depth v1 (1202) [7]  | 19.4M   | 4.91
SwapOut v1 (20) W × 2           | 1.09M   | 6.58
ResNet v2 (1001) [6]            | 10.2M   | 4.92
Dropout v2 (32) W × 4           | 7.43M   | 4.83
SwapOut v2 (32) W × 4           | 7.43M   | 4.76
corresponding ResNet with similar architecture and number of parameters. Table 4 compares models
trained with Swapout with other approaches on CIFAR-10 while Table 5 compares on CIFAR-100.
On both datasets our shallower but wider model compares well with the 1001-layer ResNet model.
Swapout uses parameters efficiently: Persistently over tables 1, 3, and 4, swapout models with
fewer parameters outperform other comparable models. For example, Swapout v2(32) W × 4 gets 4.76% with 7.43M parameters, in comparison to the ResNet version at 4.92% with 10.2M parameters.
Experiments on CIFAR-100 confirm our results: Table 5 shows that Swapout is very effective
as it improves the performance of a 20 layer model (ResNet Ours) by more than 2%. Widening
the network and reducing the stochasticity leads to further improvements. Further, a wider but
relatively shallow model trained with Swapout (22.72%; 7.46M params) is competitive with the best
performing, very deep (1001-layer) latest ResNet model (22.71%; 10.2M params).
5 Discussion and future work
Swapout is a stochastic training method that shows reliable improvements in performance and leads
to networks that use parameters efficiently. Relatively shallow swapout networks give comparable
performance to extremely deep residual networks.
Preliminary experiments on ImageNet [18] using swapout (Linear(1,0.8)) yield 28.7%/9.2% top-1/top-5 validation error, while the corresponding ResNet-152 yields 22.4%/5.8% validation errors.
We noticed that stochasticity is a difficult hyper-parameter for deeper networks and a better setting
would likely improve results.
We have shown that different stochastic training schedules produce different behaviors, but have not
searched for the best schedule in any systematic way. It may be possible to obtain improvements by
doing so. We have described an extremely general swapout mechanism. It is straightforward using
7
Table 5: Swapout is strongly competitive with the best methods on CIFAR-100, and uses parameters
efficiently in comparison. A 20 layer model (Swapout v2 (20)) trained with Swapout improves upon
the corresponding 20 layer ResNet model (ResNet v2 Ours (20)). Further, a 32 layer wider model
performs competitively against a 1001 layer ResNet (last row). Swapout uses stochastic inference.
Method                                 | #Params | Error(%)
NIN [13]                               | -       | 35.68
DSN [12]                               | -       | 34.57
FitNet [17]                            | -       | 35.04
Highway [21]                           | -       | 32.39
ResNet v1 (110) [5]                    | 1.7M    | 27.22
Stochastic Depth v1 (110) [7]          | 1.7M    | 24.58
ResNet v2 (164) [6]                    | 1.7M    | 24.33
ResNet v2 (1001) [6]                   | 10.2M   | 22.71
ResNet v2 Ours (20) W × 2              | 1.09M   | 28.08
SwapOut v2 (20) (Linear(1,0.5)) W × 2  | 1.10M   | 25.86
SwapOut v2 (56) (Linear(1,0.5)) W × 2  | 3.43M   | 24.86
SwapOut v2 (56) (Linear(1,0.8)) W × 2  | 3.43M   | 23.46
SwapOut v2 (32) (Linear(1,0.8)) W × 4  | 7.46M   | 22.72
[Figure 2 plots: mean error rate (left) and standard error of the mean (right) as functions of the number of samples (0–30), for the schedules θ1 = θ2 = Linear(1, 0.5) and θ1 = θ2 = 0.5.]
Figure 2: Stochastic inference needs few samples for a good estimate. We plot the mean error rate
on the left as a function of the number of samples for two stochastic training schedules. Standard
error of the mean is shown as the shaded interval on the left and magnified in the right plot. It is
evident that relatively few samples are needed for a reliable estimate of the mean error. The mean
and standard error was computed using 30 repetitions for each sample count. Note that stochastic
inference quickly overtakes accuracies for deterministic inference in very few samples (2-3)(Table 2).
equation 2 to apply swapout to inception networks [22] (by using several different functions of the
input and a sufficiently general form of convolution); to recurrent convolutional networks [15] (by
choosing $F_i$ to have the form $F \circ F \circ F \ldots$); and to gated networks. All our experiments focus on
comparisons to residual networks because these are the current top performers on CIFAR-10 and
CIFAR-100. It would be interesting to experiment with other versions of the method.
As with dropout and batch normalization, it is difficult to give a crisp explanation of why swapout
works. We believe that swapout causes some form of improvement in the optimization process. This
is because relatively shallow networks with swapout reliably work as well as or better than quite
deep alternatives; and because swapout is notably and reliably more efficient in its use of parameters
than comparable deeper networks. Unlike dropout, swapout will often propagate gradients while
still forcing units not to co-adapt. Furthermore, our swapout networks involve some form of tying
between layers. When a unit sometimes sees layer i and sometimes layer i − j, the gradient signal
will be exploited to encourage the two layers to behave similarly. The reason swapout is successful
likely involves both of these points.
Acknowledgments: This work is supported in part by ONR MURI Awards N00014-10-1-0934 and N00014-16-1-2007. We would like to thank NVIDIA for donating some of the GPUs used in this work.
References
[1] Y. Bengio and O. Delalleau. On the expressive power of deep architectures. In Proceedings of the 22nd
International Conference on Algorithmic Learning Theory, 2011.
[2] Y. Gal and Z. Ghahramani. Bayesian convolutional neural networks with Bernoulli approximate variational
inference. 2015.
[3] X. Glorot, A. Bordes, and Y. Bengio. Deep sparse rectifier neural networks. In AISTATS, 2011.
[4] M. Hardt, B. Recht, and Y. Singer. Train faster, generalize better: Stability of stochastic gradient descent.
CoRR, abs/1509.01240, 2015.
[5] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. CoRR, abs/1512.03385,
2015.
[6] K. He, X. Zhang, S. Ren, and J. Sun. Identity mappings in deep residual networks. CoRR, abs/1603.05027,
2016.
[7] G. Huang, Y. Sun, Z. Liu, D. Sedra, and K. Q. Weinberger. Deep networks with stochastic depth. CoRR,
abs/1603.09382, 2016.
[8] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal
covariate shift. CoRR, abs/1502.03167, 2015.
[9] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural
networks. In NIPS, 2012.
[10] D. Krueger, T. Maharaj, J. Kramár, M. Pezeshki, N. Ballas, N. R. Ke, A. Goyal, Y. Bengio, H. Larochelle, A. Courville, et al. Zoneout: Regularizing RNNs by randomly preserving hidden activations. arXiv preprint
arXiv:1606.01305, 2016.
[11] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition.
Proceedings of the IEEE, 1998.
[12] C.-Y. Lee, S. Xie, P. Gallagher, Z. Zhang, and Z. Tu. Deeply-supervised nets. AISTATS, 2015.
[13] M. Lin, Q. Chen, and S. Yan. Network in network. CoRR, abs/1312.4400, 2013.
[14] V. Nair and G. E. Hinton. Rectified linear units improve restricted boltzmann machines. In ICML, 2010.
[15] P. H. Pinheiro and R. Collobert. Recurrent convolutional neural networks for scene parsing. arXiv preprint
arXiv:1306.2795, 2013.
[16] A. Rahimi and B. Recht. Random features for large-scale kernel machines. In NIPS, 2007.
[17] A. Romero, N. Ballas, S. E. Kahou, A. Chassang, C. Gatta, and Y. Bengio. Fitnets: Hints for thin deep
nets. ICLR, 2015.
[18] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla,
M. Bernstein, et al. Imagenet large scale visual recognition challenge. International Journal of Computer
Vision, 2015.
[19] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition.
CoRR, abs/1409.1556, 2014.
[20] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: A simple way to
prevent neural networks from overfitting. The Journal of Machine Learning Research, 2014.
[21] R. K. Srivastava, K. Greff, and J. Schmidhuber. Training very deep networks. In NIPS, 2015.
[22] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. E. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. CoRR, abs/1409.4842, 2014.
[23] L. Wan, M. Zeiler, S. Zhang, Y. L. Cun, and R. Fergus. Regularization of neural networks using dropconnect.
In ICML, pages 1058–1066, 2013.
[24] M. D. Zeiler and R. Fergus. Stochastic pooling for regularization of deep convolutional neural networks.
arXiv preprint arXiv:1301.3557, 2013.
Perspective Transformer Nets: Learning Single-View
3D Object Reconstruction without 3D Supervision
Xinchen Yan1
Jimei Yang2 Ersin Yumer2 Yijie Guo1 Honglak Lee1,3
1
University of Michigan, Ann Arbor
2
Adobe Research
3
Google Brain
{xcyan,guoyijie,honglak}@umich.edu, {jimyang,yumer}@adobe.com
Abstract
Understanding the 3D world is a fundamental problem in computer vision. However, learning a good representation of 3D objects is still an open problem due
to the high dimensionality of the data and many factors of variation involved. In
this work, we investigate the task of single-view 3D object reconstruction from a
learning agent?s perspective. We formulate the learning process as an interaction
between 3D and 2D representations and propose an encoder-decoder network with
a novel projection loss defined by the perspective transformation. More importantly,
the projection loss enables unsupervised learning using 2D observations without explicit 3D supervision. We demonstrate the ability of the model to generate a 3D volume from a single 2D image with three sets of experiments: (1) learning from
single-class objects; (2) learning from multi-class objects and (3) testing on novel
object classes. Results show superior performance and better generalization ability
for 3D object reconstruction when the projection loss is involved.
1 Introduction
Understanding the 3D world is at the heart of successful computer vision applications in robotics, rendering and modeling [19]. It is especially important to solve this problem using the most convenient
visual sensory data: 2D images. In this paper, we propose an end-to-end solution to the challenging
problem of predicting the underlying true shape of an object given an arbitrary single image observation of it. This problem definition embodies a fundamental challenge: Imagery observations of
3D shapes are interleaved representations of intrinsic properties of the shape itself (e.g., geometry,
material), as well as its extrinsic properties that depend on its interaction with the observer and the
environment (e.g., orientation, position, and illumination). Physically principled shape understanding
should be able to efficiently disentangle such interleaved factors.
This observation leads to insight that an end-to-end solution to this problem from the perspective
of learning agents (neural networks) should involve the following properties: 1) the agent should
understand the physical meaning of how a 2D observation is generated from the 3D shape, and 2) the
agent should be conscious about the outcome of its interaction with the object; more specifically, by
moving around the object, the agent should be able to correspond the observations to the viewpoint
change. If such properties are embodied in a learning agent, it will be able to disentangle the shape
from the extrinsic factors because these factors are trivial to understand in the 3D world. To enable the
agent with these capabilities, we introduce a built-in camera system that can transform the 3D object
into 2D images in-network. Additionally, we architect the network such that the latent representation
disentangles the shape from view changes. More specifically, our network takes as input an object
image and predicts its volumetric 3D shape so that the perspective transformations of predicted shape
match well with corresponding 2D observations.
We implement this neural network based on a combination of image encoder, volume decoder
and perspective transformer (similar to spatial transformer as introduced by Jaderberg et al. [6]).
During training, the volumetric 3D shape is gradually learned from single-view input and the
feedback of other views through back-propagation. Thus at test time, the 3D shape can be directly
generated from a single image. We conduct experimental evaluations using a subset of 3D models
from ShapeNetCore [1]. Results from single-class and multi-class training demonstrate excellent
performance of our network for volumetric 3D reconstruction. Our main contributions are summarized
below.
• We show that neural networks are able to predict 3D shape from a single view without using
the ground truth 3D volumetric data for training. This is made possible by introducing a 2D
silhouette loss function based on perspective transformations.
• We train a single network for multi-class 3D object volumetric reconstruction and show its
generalization potential to unseen categories.
• Compared to training with full azimuth angles, we demonstrate comparable results when training with partial views.
2 Related Work
Representation learning for 3D objects. Recently, advances have been made in learning deep
neural networks for 3D objects using large-scale CAD databases [22, 1]. Wu et al. [22] proposed a
deep generative model that extends the convolutional deep belief network [11] to model volumetric
3D shapes. Different from [22] that uses volumetric 3D representation, Su et al. [18] proposed
a multi-view convolutional network for 3D shape categorization with a view-pooling mechanism.
These methods focus more on 3D shape recognition instead of 3D shape reconstruction. Recent
work [20, 14, 4, 2] attempt to learn a joint representation for both 2D images and 3D shapes.
Tatarchenko et al. [20] developed a convolutional network to synthesize unseen 3D views from a
single image and demonstrated that the synthesized images can be used to reconstruct 3D shape.
Qi et al. [14] introduced a joint embedding by combining volumetric representation and multi-view
representation together to improve 3D shape recognition performance. Girdhar et al. [4] proposed a
generative model for 3D volumetric data and combined it with a 2D image embedding network for
single-view 3D shape generation. Choy et al. [2] introduce a 3D recurrent neural network (3D-R2N2)
based on long-short term memory (LSTM) to predict the 3D shape of an object from a single view or
multiple views. Compared to these single-view methods, our 3D reconstruction network is learned
end-to-end and the network can be even trained without ground truth volumes.
Concurrent to our work, Renzede et al. [16] introduced a general framework to learn 3D structures
from 2D observations with 3D-2D projection mechanism. Their 3D-2D projection mechanism either
has learnable parameters or adopts non-differentiable component using MCMC, while our perspective
projection nets is both differentiable and parameter-free.
Representation learning by transformations. Learning from transformed sensory data has gained
attention [12, 5, 15, 13, 23, 6, 24] in recent years. Memisevic and Hinton [12] introduced a gated
Boltzmann machine that models the transformations between image pairs using multiplicative
interaction. Reed et al. [15] showed that disentangled hidden-unit representations of Boltzmann
Machines (disBM) can be learned based on transformations on the data manifold. Yang et al. [23]
learned out-of-plane rotations of rendered images to obtain disentangled identity and viewpoint units
via curriculum learning. Kulkarni et al. [9] proposed to learn a semantically interpretable latent
representation from 3D rendered images using variational auto-encoders [8] by including specific
transformations in mini-batches. Complementary to convolutional networks, Jaderberg et al. [6]
introduced a differentiable sampling layer that directly incorporates geometric transformations into
representation learning. Concurrent to our work, Wu et al. [21] proposed a 3D-2D projection layer
that enables the learning of 3D object structures using 2D keypoints as annotation.
3 Problem Formulation
In this section, we develop neural networks for reconstructing 3D objects. From the perspective of a
learning agent (e.g., a neural network), a natural way to understand one 3D object X is from its 2D
views under transformations. By moving around the 3D object, the agent should be able to recognize its
unique features and eventually build a 3D mental model of it, as illustrated in Figure 1(a). Assume
that I^(k) is the 2D image from the k-th viewpoint α^(k), given by the projection I^(k) = P(X; α^(k)), or rendering in
graphics. An object X in a certain scene is the entanglement of shape, color and texture (its intrinsic
properties), and the image I^(k) is the further entanglement with viewpoint and illumination (extrinsic
parameters). The general goal of understanding 3D objects can be viewed as disentangling intrinsic
properties and extrinsic parameters from a single image.
Figure 1: (a) Understanding a 3D object from the learning agent's perspective; (b) single-view 3D volume
reconstruction with perspective transformation; (c) illustration of perspective projection. The
minimum and maximum disparity in the screen coordinates are denoted as d_min and d_max.
In this paper, we focus on 3D shape learning by ignoring the color and texture factors, and
we further simplify the problem by making the following assumptions: 1) the scene is a clean white
background; 2) the illumination is constant natural lighting. We use the volumetric representation of
the 3D shape V, where each voxel V_i is a binary unit. In other words, the voxel equals one, i.e., V_i = 1,
if the i-th voxel space is occupied by the shape; otherwise V_i = 0. Assuming the 2D silhouette S^(k)
is obtained from the k-th image I^(k), we can specify the 3D-2D projection S^(k) = P(V; α^(k)). Note
that 2D silhouette estimation is typically solved by object segmentation in the real world, but it becomes
trivial in our case due to the white background.
In the following sub-sections, we propose a formulation for learning to predict the volumetric 3D
shape V from an image I^(k), with and without 3D volume supervision.
3.1 Learning to Reconstruct Volumetric 3D Shape from Single-View
We consider single-view volumetric 3D reconstruction as a dense prediction problem and develop a
convolutional encoder-decoder network for this learning task, denoted by V̂ = f(I^(k)). The encoder
network h(·) learns a viewpoint-invariant latent representation h(I^(k)), which is then used by the
decoder g(·) to generate the volume V̂ = g(h(I^(k))). In case the ground truth volumetric shapes V
are available, the problem can easily be treated as learning volumetric 3D shapes with a regular
reconstruction objective in 3D space: L_vol(I^(k)) = ||f(I^(k)) − V||_2^2.
In practice, however, the ground truth volumetric 3D shapes may not be available for training. For
example, the agent observes the 2D silhouette via its built-in camera without accessing the volumetric
3D shape. Inspired by the space carving theory [10], we propose a silhouette-based volumetric loss
function. In particular, we build on the premise that a 2D silhouette Ŝ^(j), projected from the generated
volume V̂ under a certain camera viewpoint α^(j), should match the ground truth 2D silhouette S^(j)
from the image observations. In other words, if all the generated silhouettes Ŝ^(j) match well with their
corresponding ground truth silhouettes S^(j) for all j, then we hypothesize that the generated volume
V̂ should be as good as one instance of the visual hull equivalence class of the ground truth volume V [10].
Therefore, we formulate the learning objective for the k-th image as

L_proj(I^(k)) = Σ_{j=1}^{n} L_proj^(j)(I^(k); S^(j), α^(j)) = (1/n) Σ_{j=1}^{n} ||P(f(I^(k)); α^(j)) − S^(j)||_2^2,    (1)
where j is the index of the output 2D silhouettes, n is the number of silhouettes used for each input image,
and P(·) is the 3D-2D projection function. Note that the above training objective Eq. (1) enables
training without using ground-truth volumes. The network diagram is illustrated in Figure 1(b). A
more general learning objective is given by a combination of both objectives:

L_comb(I^(k)) = λ_proj L_proj(I^(k)) + λ_vol L_vol(I^(k)),    (2)

where λ_proj and λ_vol are constants that control the tradeoff between the two losses.
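To make the two objectives concrete, here is a minimal NumPy sketch of Eq. (1) and Eq. (2). It is not the authors' released Torch code; the `project` callable (standing in for the projection P(·; α) of Section 3.2) and the argument names and shapes are illustrative assumptions.

```python
import numpy as np

def projection_loss(pred_volume, silhouettes, viewpoints, project):
    """Eq. (1): mean squared error between projected and target silhouettes.

    pred_volume: (32, 32, 32) predicted occupancy f(I^(k))
    silhouettes: list of n target masks S^(j), each (32, 32)
    viewpoints:  list of n camera parameters alpha^(j)
    project:     callable implementing P(V; alpha) -> (32, 32) silhouette
    """
    losses = [np.sum((project(pred_volume, a) - s) ** 2)
              for s, a in zip(silhouettes, viewpoints)]
    return np.mean(losses)

def combined_loss(pred_volume, gt_volume, silhouettes, viewpoints, project,
                  lam_proj=1.0, lam_vol=1.0):
    """Eq. (2): weighted sum of projection and volume losses (weights assumed)."""
    l_proj = projection_loss(pred_volume, silhouettes, viewpoints, project)
    l_vol = np.sum((pred_volume - gt_volume) ** 2)
    return lam_proj * l_proj + lam_vol * l_vol
```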
3.2 Perspective Transformer Nets
As defined previously, the 2D silhouette S^(k) is obtained via perspective projection given the input 3D
volume V and a specific camera viewpoint α^(k). In this work, we implement the perspective projection
(see Figure 1(c)) with a 4-by-4 transformation matrix Θ_{4×4}, where K is the camera calibration matrix
and (R, t) are the extrinsic parameters:

Θ_{4×4} = [ K  0 ; 0^⊤  1 ] [ R  t ; 0^⊤  1 ].    (3)

For each point p_i^s = (x_i^s, y_i^s, z_i^s, 1) in 3D world coordinates, we compute the corresponding point
p_i^t = (x_i^t, y_i^t, 1, d_i^t) in screen coordinates (plus disparity d_i^t) using the perspective transformation:
p_i^s ∼ Θ_{4×4} p_i^t.
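For illustration, the following sketch (my own, under the stated pinhole-camera assumptions) composes Θ_{4×4} from K and (R, t) as in Eq. (3) and maps a homogeneous screen-space sample to its world-space location; the dehomogenization convention is an assumption.

```python
import numpy as np

def perspective_matrix(K, R, t):
    """Compose Theta_{4x4} = [[K, 0], [0^T, 1]] @ [[R, t], [0^T, 1]] (Eq. 3)."""
    intrinsics = np.eye(4)
    intrinsics[:3, :3] = K
    extrinsics = np.eye(4)
    extrinsics[:3, :3] = R
    extrinsics[:3, 3] = t
    return intrinsics @ extrinsics

def world_point(theta, p_screen):
    """Map a homogeneous screen-space sample p^t = (x_t, y_t, 1, d_t) to the
    world-space point p^s ~ Theta_{4x4} p^t (up to scale); normalizing by the
    last coordinate is an assumed convention."""
    p = theta @ p_screen
    return p / p[3]
```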
Similar to the spatial transformer network introduced in [6], we propose a 2-step procedure: (1)
performing dense sampling from the input volume (in 3D world coordinates) to the output volume (in screen
coordinates), and (2) flattening the 3D spatial output across the disparity dimension. In the experiments,
we assume that the transformation matrix is always given as input, parametrized by the viewpoint α.
Again, the 3D point (x_i^s, y_i^s, z_i^s) in the input volume V ∈ R^{H×W×D} and the corresponding point (x_i^t, y_i^t, d_i^t)
in the output volume U ∈ R^{H'×W'×D'} are linked by the perspective transformation matrix Θ_{4×4}. Here,
(W, H, D) and (W', H', D') are the width, height and depth of the input and output volume, respectively.
We summarize the dense sampling step and channel-wise flattening step as follows.
U_i = Σ_{n=1}^{H} Σ_{m=1}^{W} Σ_{l=1}^{D} V_{nml} max(0, 1 − |x_i^s − m|) max(0, 1 − |y_i^s − n|) max(0, 1 − |z_i^s − l|)

S_{n'm'} = max_{l'} U_{n'm'l'}    (4)
Here, U_i is the i-th voxel value corresponding to the point (x_i^t, y_i^t, d_i^t), where i ∈ {1, ..., W'·H'·D'}. Note that we use the max operator for projection instead of summation along one dimension,
since the volume is represented as a binary cube where the solid voxels have value 1 and empty
voxels have value 0. Intuitively, we have the following two observations: (1) an empty voxel does
not contribute to the foreground pixels of S from any viewpoint; (2) a solid voxel contributes to
a foreground pixel of S only if it is visible from the specific viewpoint.
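As a reference implementation of Eq. (4), the following naive NumPy sketch performs the trilinear sampling and the max-flattening over disparity. It loops over all voxels for clarity (a practical layer would visit only the 8 neighboring grid points per sample), and `world_coords` is an assumed precomputed array of (x^s, y^s, z^s) sampling locations.

```python
import numpy as np

def perspective_transform(V, world_coords):
    """V: (H, W, D) input volume in world coordinates (binary occupancies).
    world_coords: (H', W', D', 3) sampling locations (x^s, y^s, z^s) for each
    output voxel, precomputed from Theta_{4x4} as in Eq. (3).
    Returns the flattened silhouette S of shape (H', W')."""
    H, W, D = V.shape
    Hp, Wp, Dp, _ = world_coords.shape
    U = np.zeros((Hp, Wp, Dp))
    # Step 1: trilinear sampling (first line of Eq. 4).
    for idx in np.ndindex(Hp, Wp, Dp):
        xs, ys, zs = world_coords[idx]
        for n in range(H):          # the weight is nonzero only near (ys, xs, zs),
            for m in range(W):      # so a real implementation visits at most
                for l in range(D):  # 8 neighbors instead of all voxels
                    w = (max(0, 1 - abs(xs - m)) * max(0, 1 - abs(ys - n))
                         * max(0, 1 - abs(zs - l)))
                    U[idx] += V[n, m, l] * w
    # Step 2: max over the disparity dimension (second line of Eq. 4).
    return U.max(axis=2)
```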
3.3 Training
As the same volumetric 3D shape is expected to be generated from different images of the object, the
encoder network is required to learn a 3D view-invariant latent representation:

h(I^(1)) = h(I^(2)) = · · · = h(I^(k)).    (5)
This sub-problem is itself a challenging task in computer vision [23, 9]. Thus, we adopt a two-stage
training procedure: first we learn the encoder network for a 3D view-invariant latent representation
h(I), and then we train the volumetric decoder with the perspective transformer networks. As shown in [23],
a disentangled representation of 2D synthetic images can be learned from consecutive rotations with
a recurrent network; we therefore pre-train the encoder of our network using a similar curriculum strategy so
that the latent representation contains only 3D view-invariant identity information about the object. Once
we obtain an encoder network that recognizes the identity of single-view images, we next learn the
volume generator regularized by the perspective transformer networks. To encourage the volume
decoder to learn a consistent 3D volume from different viewpoints, we include the projections from
neighboring viewpoints in each mini-batch, so that the network has relatively sufficient information to
reconstruct the 3D shape.
4 Experiments
ShapeNetCore. This dataset contains about 51,300 unique 3D models from 55 common object
categories [1]. Each 3D model is rendered from 24 azimuth angles (in steps of 15°) at a fixed
elevation angle (30°) under the same camera and lighting setup. We then crop and rescale the
center region of each image to 64 × 64 × 3 pixels. For each ground truth 3D shape, we create a
volume of 32 × 32 × 32 voxels from its canonical orientation (0°).
Network Architecture. As shown in Figure 2, our encoder-decoder network has three components:
a 2D convolutional encoder, a 3D up-convolutional decoder and a perspective transformer network.
The 2D convolutional encoder consists of 3 convolution layers, followed by 3 fully-connected layers
(the convolution layers have 64, 128 and 256 channels with a fixed filter size of 5 × 5; the three
fully-connected layers have 1024, 1024 and 512 neurons, respectively).
Figure 2: Illustration of the network architecture: a 2D convolutional encoder (64×64×3 input; 32×32×64, 16×16×128, 8×8×256 feature maps; 1×1×1024, 1×1×1024 and 1×1×512 latent units), a 3D volume generator (512×3×3×3 → 256×6×6×6 → 96×15×15×15 → 1×32×32×32), and a perspective transformer (4×4 transformation, grid generator and sampler) producing the 1×32×32 target projection.
The 3D convolutional decoder consists of one fully-connected layer, followed by 3 up-convolution layers (the fully-connected layer
has 3 × 3 × 3 × 512 neurons; the up-convolution layers have 256, 96 and 1 channels with filter sizes of
4 × 4 × 4, 5 × 5 × 5 and 6 × 6 × 6, respectively). For the perspective transformer network, we use a perspective
transformation to project the 3D volume onto the 2D silhouette, where the transformation matrix is parametrized
by 16 variables and the sampling grid is set to 32 × 32 × 32. We use the same network architecture for
all experiments.
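For concreteness, here is a rough PyTorch transcription of the dimensions above (the paper's implementation is in Torch [3]; the strides, paddings, nonlinearities and the final sigmoid are my assumptions, chosen so the layer sizes match those reported in Figure 2).

```python
import torch.nn as nn

class PTNEncoderDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(                              # 3x64x64 input
            nn.Conv2d(3, 64, 5, stride=2, padding=2), nn.ReLU(),   # -> 64x32x32
            nn.Conv2d(64, 128, 5, stride=2, padding=2), nn.ReLU(), # -> 128x16x16
            nn.Conv2d(128, 256, 5, stride=2, padding=2), nn.ReLU(),# -> 256x8x8
            nn.Flatten(),
            nn.Linear(8 * 8 * 256, 1024), nn.ReLU(),
            nn.Linear(1024, 1024), nn.ReLU(),
            nn.Linear(1024, 512), nn.ReLU(),                       # latent unit
        )
        self.fc_dec = nn.Linear(512, 3 * 3 * 3 * 512)              # 512x3x3x3
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(512, 256, 4, stride=2, padding=1), nn.ReLU(),  # 256x6^3
            nn.ConvTranspose3d(256, 96, 5, stride=2), nn.ReLU(),              # 96x15^3
            nn.ConvTranspose3d(96, 1, 6, stride=2, padding=1), nn.Sigmoid(),  # 1x32^3
        )

    def forward(self, img):
        h = self.encoder(img)
        v = self.fc_dec(h).view(-1, 512, 3, 3, 3)
        return self.decoder(v)
```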
Implementation Details. We used the ADAM [7] solver for stochastic optimization in all
experiments. During the pre-training stage (for the encoder), we used mini-batches of size 32, 32, 8, 4,
3 and 2 for training RNN-1, RNN-2, RNN-4, RNN-8, RNN-12 and RNN-16, as used in Yang
et al. [23]. We used a learning rate of 10^{-4} for RNN-1 and 10^{-5} for the remaining recurrent neural
networks. During the fine-tuning stage (for the volume decoder), we used a mini-batch size of 6 and a
learning rate of 10^{-4}. For each object in a mini-batch, we include projections from all 24 views as
supervision. The models, including the perspective transformer nets, are implemented using Torch [3].
To download the code, please refer to the project webpage: http://goo.gl/YEJ2H6.
Experimental Design. As mentioned in the formulation, there are several variants of the model
depending on the hyper-parameters λ_proj and λ_vol of the learning objective. In the experimental section,
we denote the models trained with projection loss only, volume loss only, and combined loss as
PTN-Proj (PR), CNN-Vol (VO), and PTN-Comb (CO), respectively.
In the experiments, we address the following questions: (1) Will the model trained with the combined
loss achieve better single-view 3D reconstruction performance than the model trained with volume loss
only (PTN-Comb vs. CNN-Vol)? (2) What is the performance gap between the models with and
without ground-truth volumes (PTN-Comb vs. PTN-Proj)? (3) How do the three models generalize
to instances from unseen categories that are not present in the training set? To answer these questions,
we trained the three models under two experimental settings: single category and multiple categories.
4.1 Training on a single category
We select the chair category as the training set for the single-category experiment. For model comparison,
we first conduct quantitative evaluations on the 3D volumes generated from single-view images of the
test set. For each instance in the test set, we generate one volume per view image (24 volumes
generated in total). Given a pair of ground-truth volume and generated volume (thresholded at 0.5),
we compute the intersection-over-union (IU) score, and the average IU score is calculated over the 24
volumes of all instances in the test set. In addition, we provide a baseline method based on nearest
neighbor (NN) search. Specifically, for each test image, we extract the VGG fc6 feature (a 4096-dimensional
vector) [17] and retrieve the nearest training example using Euclidean distance in the
feature space. The ground-truth 3D volume corresponding to the nearest training example is then
regarded as the retrieval result.
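The IU metric itself is simple to state in code; below is a small sketch of mine, using the 0.5 threshold mentioned above.

```python
import numpy as np

def voxel_iu(pred, gt, threshold=0.5):
    """Intersection-over-union between a predicted occupancy grid in [0, 1]
    and a binary ground-truth grid, both of shape (32, 32, 32)."""
    p = pred >= threshold
    g = gt.astype(bool)
    intersection = np.logical_and(p, g).sum()
    union = np.logical_or(p, g).sum()
    return intersection / union if union > 0 else 1.0
```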
Table 1: Prediction IU using the models trained on the chair category. Below, "chair" corresponds to
the setting where each object is observable from full azimuth angles, while "chair-N" corresponds
to the setting where each object is only observable from a narrow range (subset) of azimuth angles.

Method / Evaluation Set               | chair (training) | chair (test) | chair-N (training) | chair-N (test)
PTN-Proj:single (no vol. supervision) | 0.5712           | 0.5027       | 0.4882             | 0.4583
PTN-Comb:single (vol. supervision)    | 0.6435           | 0.5067       | 0.5564             | 0.4429
CNN-Vol:single (vol. supervision)     | 0.6390           | 0.4983       | 0.5518             | 0.4380
NN search (vol. supervision)          | -                | 0.3557       | -                  | 0.3073
Figure 3: Single-class results; columns show the input image, the ground truth and each model's prediction rendered at azimuths 310° and 130°. GT: ground truth, PR: PTN-Proj, CO: PTN-Comb, VO: CNN-Vol
(best viewed in the digital version; zoom in for 3D shape details). The angles are shown in
parentheses. Please also see more examples and video animations on the project webpage.
As shown in Table 1, the model trained without volume supervision (projection loss) performs as
well as the model trained with volume supervision (volume loss) on the chair category (test set). In
addition to the comparison of overall IU, we measured the view-dependent IU for each model. As
shown in Figure 4, the average prediction performance (mean IU) changes as we gradually move from the
first view to the last view (15° to 360°). For visual comparison, we provide a side-by-side analysis
of the three models we trained. As shown in Figure 3, each row is an independent
comparison. The first column is the 2D image used as input to the model. The second and
third columns show the ground-truth 3D volume (the same volume rendered from two views for better
visualization). Similarly, we list the model trained with projection loss only (PTN-Proj),
combined loss (PTN-Comb) and volume loss only (CNN-Vol) from the fourth column to the ninth column.
The volumes predicted by PTN-Proj and PTN-Comb faithfully represent the shape. However, the
volumes predicted by CNN-Vol do not form a solid chair shape in some cases.
Figure 4: View-dependent IU (mean IU over azimuth, 0° to 350°) for PTN-Proj, PTN-Comb and CNN-Vol. For illustration, images of a sample chair with corresponding azimuth
angles are shown below the curves. For example, 3D reconstruction from 0° is more difficult than
from 30° due to self-occlusion.
Table 2: Prediction IU using the models trained on large-scale datasets.

Test Category   | airplane | bench  | dresser | car    | chair  | display | lamp
PTN-Proj:multi  | 0.5556   | 0.4924 | 0.6823  | 0.7123 | 0.4494 | 0.5395  | 0.4223
PTN-Comb:multi  | 0.5836   | 0.5079 | 0.7109  | 0.7381 | 0.4702 | 0.5473  | 0.4158
CNN-Vol:multi   | 0.5747   | 0.5142 | 0.6975  | 0.7348 | 0.4451 | 0.5390  | 0.3865
NN search       | 0.5564   | 0.4875 | 0.5713  | 0.6519 | 0.3512 | 0.3958  | 0.2905

Test Category   | loudspeaker | rifle  | sofa   | table  | telephone | vessel
PTN-Proj:multi  | 0.5868      | 0.5987 | 0.6221 | 0.4938 | 0.7504    | 0.5507
PTN-Comb:multi  | 0.5675      | 0.6097 | 0.6534 | 0.5146 | 0.7728    | 0.5399
CNN-Vol:multi   | 0.5478      | 0.6031 | 0.6467 | 0.5136 | 0.7692    | 0.5445
NN search       | 0.4600      | 0.5133 | 0.5314 | 0.3097 | 0.6696    | 0.4078

Figure 5: Multiclass results; columns show the input image, the ground truth and each model's prediction rendered at azimuths 310° and 130°. GT: ground truth, PR: PTN-Proj, CO: PTN-Comb, VO: CNN-Vol (best
viewed in the digital version; zoom in for 3D shape details). The angles are shown in parentheses.
Please also see more examples and video animations on the project webpage.
Training with partial views. We also conduct control experiments where each object is only
observable from a narrow range of azimuth angles (e.g., 8 out of 24 views, such as 0°, 15°, ..., 105°).
We include a detailed description in the supplementary materials. As shown in Table 1 (last two
columns), the performance of all three models drops slightly, but the conclusion is similar: the proposed
network (1) learns better 3D shape with projection regularization and (2) is capable of learning the
3D shape from 2D observations only.
4.2 Training on multiple categories
We conducted the multi-class experiment using the same setup as in the single-class experiment. For the
multi-category experiment, the training set includes 13 major categories: airplane, bench, dresser, car,
chair, display, lamp, loudspeaker, rifle, sofa, table, telephone and vessel. We held out
20% of the instances from each category as test data. As shown in Table 2, the
quantitative results demonstrate that (1) the model trained with the combined loss is superior to the volume loss in
most cases and (2) the model trained with the projection loss performs as well as the volume/combined loss.
From the visualization results shown in Figure 5, all three models predict volumes reasonably well.
There are only subtle performance differences in object parts such as the wing of an airplane.
Table 3: Prediction IU in out-of-category tests.

Method / Test Category                | bed    | bookshelf | cabinet | motorbike | train
PTN-Proj:single (no vol. supervision) | 0.1801 | 0.1707    | 0.3937  | 0.1189    | 0.1550
PTN-Comb:single (vol. supervision)    | 0.1507 | 0.1186    | 0.2626  | 0.0643    | 0.1044
CNN-Vol:single (vol. supervision)     | 0.1558 | 0.1183    | 0.2588  | 0.0580    | 0.0956
PTN-Proj:multi (no vol. supervision)  | 0.1944 | 0.3448    | 0.6484  | 0.3216    | 0.3670
PTN-Comb:multi (vol. supervision)     | 0.1647 | 0.3195    | 0.5257  | 0.1914    | 0.3744
CNN-Vol:multi (vol. supervision)      | 0.1586 | 0.3037    | 0.4977  | 0.2253    | 0.3740

Figure 6: Out-of-category results; columns show the input image, the ground truth and each model's prediction rendered at azimuths 310° and 130°. GT: ground truth, PR: PTN-Proj, CO: PTN-Comb, VO: CNN-Vol
(best viewed in the digital version; zoom in for 3D shape details). The angles are shown in
parentheses. Please also see more examples and video animations on the project webpage.
4.3 Out-of-Category Tests
Ideally, an intelligent agent should have the ability to generalize the knowledge learned from previously seen categories to unseen categories. To this end, we design out-of-category tests for both the
model trained on a single category and the model trained on multiple categories, as described in Section 4.1 and Section 4.2, respectively. We select 5 unseen categories from ShapeNetCore for the out-of-category tests: bed, bookshelf, cabinet,
motorbike and train. Here, the two categories cabinet and train are
relatively easier than the other categories, since there may be instances in the training set with similar
shapes (e.g., dresser, vessel and airplane). But bed, bookshelf and motorbike can be
considered completely novel categories in terms of shape.
We summarize the quantitative results in Table 3. Surprisingly, the model trained on multiple
categories still achieves reasonably good overall IU. As shown in Figure 6, the proposed projection
loss generalizes better than the models trained using the combined loss or volume loss on train, motorbike
and cabinet. The observations from the out-of-category tests suggest that (1) generalization from a
single category is very challenging, but training on multiple categories can significantly improve
generalization, and (2) the projection regularization can help learn a robust representation for
better generalization to unseen categories.
5 Conclusions
In this paper, we investigate the problem of single-view 3D shape reconstruction from a learning
agent's perspective. By formulating the learning procedure as an interaction between the 3D shape
and the 2D observation, we propose to learn an encoder-decoder network that takes advantage of
the projection transformation as regularization. Experimental results demonstrate (1) excellent
performance of the proposed model in reconstructing objects even without ground-truth 3D volumes
as supervision and (2) the generalization potential of the proposed model to unseen categories.
Acknowledgments
This work was supported in part by NSF CAREER IIS-1453651, ONR N00014-13-1-0762, Sloan
Research Fellowship, and a gift from Adobe. We acknowledge NVIDIA for the donation of GPUs.
We also thank Yuting Zhang, Scott Reed, Junhyuk Oh, Ruben Villegas, Seunghoon Hong, Wenling
Shang, Kibok Lee, Lajanugen Logeswaran, Rui Zhang and Yi Zhang for helpful comments and
discussions.
References
[1] A. X. Chang, T. Funkhouser, L. Guibas, P. Hanrahan, Q. Huang, Z. Li, S. Savarese, M. Savva, S. Song,
H. Su, et al. Shapenet: An information-rich 3d model repository. arXiv preprint arXiv:1512.03012, 2015.
[2] C. B. Choy, D. Xu, J. Gwak, K. Chen, and S. Savarese. 3d-r2n2: A unified approach for single and
multi-view 3d object reconstruction. In ECCV, 2016.
[3] R. Collobert, K. Kavukcuoglu, and C. Farabet. Torch7: A matlab-like environment for machine learning.
In BigLearn, NIPS Workshop, number EPFL-CONF-192376, 2011.
[4] R. Girdhar, D. F. Fouhey, M. Rodriguez, and A. Gupta. Learning a predictable and generative vector
representation for objects. arXiv preprint arXiv:1603.08637, 2016.
[5] G. E. Hinton, A. Krizhevsky, and S. D. Wang. Transforming auto-encoders. In ICANN. Springer, 2011.
[6] M. Jaderberg, K. Simonyan, A. Zisserman, et al. Spatial transformer networks. In NIPS, 2015.
[7] D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[8] D. P. Kingma and M. Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
[9] T. D. Kulkarni, W. Whitney, P. Kohli, and J. B. Tenenbaum. Deep convolutional inverse graphics network.
In NIPS, 2015.
[10] K. N. Kutulakos and S. M. Seitz. A theory of shape by space carving. International Journal of Computer
Vision, 38(3):199–218, 2000.
[11] H. Lee, R. Grosse, R. Ranganath, and A. Y. Ng. Convolutional deep belief networks for scalable
unsupervised learning of hierarchical representations. In ICML, 2009.
[12] R. Memisevic and G. Hinton. Unsupervised learning of image transformations. In CVPR, 2007.
[13] V. Michalski, R. Memisevic, and K. Konda. Modeling deep temporal dependencies with recurrent
"grammar cells". In NIPS, 2014.
[14] C. R. Qi, H. Su, M. Niessner, A. Dai, M. Yan, and L. J. Guibas. Volumetric and multi-view cnns for object
classification on 3d data. In CVPR, 2016.
[15] S. Reed, K. Sohn, Y. Zhang, and H. Lee. Learning to disentangle factors of variation with manifold
interaction. In ICML, 2014.
[16] D. J. Rezende, S. Eslami, S. Mohamed, P. Battaglia, M. Jaderberg, and N. Heess. Unsupervised learning of
3d structure from images. In NIPS, 2016.
[17] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition.
arXiv preprint arXiv:1409.1556, 2014.
[18] H. Su, S. Maji, E. Kalogerakis, and E. Learned-Miller. Multi-view convolutional neural networks for 3d
shape recognition. In ICCV, 2015.
[19] R. Szeliski. Computer vision: algorithms and applications. Springer Science & Business Media, 2010.
[20] M. Tatarchenko, A. Dosovitskiy, and T. Brox. Single-view to multi-view: Reconstructing unseen views
with a convolutional network. In ECCV, 2016.
[21] J. Wu, T. Xue, J. J. Lim, Y. Tian, J. B. Tenenbaum, A. Torralba, and W. T. Freeman. Single image 3d
interpreter network. In ECCV, 2016.
[22] Z. Wu, S. Song, A. Khosla, F. Yu, L. Zhang, X. Tang, and J. Xiao. 3d shapenets: A deep representation for
volumetric shapes. In CVPR, 2015.
[23] J. Yang, S. E. Reed, M.-H. Yang, and H. Lee. Weakly-supervised disentangling with recurrent transformations for 3d view synthesis. In NIPS, 2015.
[24] E. Yumer and N. J. Mitra. Learning semantic deformation flows with 3d convolutional networks. In ECCV,
2016.
Haipeng Luo
Princeton University, Princeton, NJ USA
[email protected]
Nicolò Cesa-Bianchi
Università degli Studi di Milano, Italy
[email protected]
Alekh Agarwal
Microsoft Research, New York, NY USA
[email protected]
John Langford
Microsoft Research, New York, NY USA
[email protected]
Abstract
We propose Sketched Online Newton (SON), an online second order learning
algorithm that enjoys substantially improved regret guarantees for ill-conditioned
data. SON is an enhanced version of the Online Newton Step which, via sketching
techniques, enjoys a running time linear in the dimension and sketch size. We
further develop sparse forms of the sketching methods (such as Oja's rule), making
the computation linear in the sparsity of the features. Together, these yield an algorithm
that eliminates all computational obstacles of previous second order online learning approaches.
1 Introduction
Online learning methods are highly successful at rapidly reducing the test error on large, highdimensional datasets. First order methods are particularly attractive in such problems as they typically
enjoy computational complexity linear in the input size. However, the convergence of these methods
crucially depends on the geometry of the data; for instance, running the same algorithm on a rotated
set of examples can return vastly inferior results. See Fig. 1 for an illustration.
Second order algorithms such as Online Newton Step [18] have the attractive property of being
invariant to linear transformations of the data, but typically require space and update time quadratic
in the number of dimensions. Furthermore, the dependence on dimension is not improved even
if the examples are sparse. These issues lead to the key question in our work: Can we develop
(approximately) second order online learning algorithms with efficient updates? We show that
the answer is "yes" by developing efficient sketched second order methods with regret guarantees.
Specifically, the three main contributions of this work are:
1. Invariant learning setting and optimal algorithms (Section 2). The typical online regret
minimization setting evaluates against a benchmark that is bounded in some fixed norm (such as the
`2 -norm), implicitly putting the problem in a nice geometry. However, if all the features are scaled
down, it is desirable to compare with accordingly larger weights, which is precluded by an apriori
fixed norm bound. We study an invariant learning setting similar to the paper [33] which compares
the learner to a benchmark only constrained to generate bounded predictions on the sequence of
examples. We show that a variant of the Online Newton Step [18], while quadratic in computation,
stays regret-optimal with a nearly matching lower bound in this more general setting.
2. Improved efficiency via sketching (Section 3). To overcome the quadratic running time, we
next develop sketched variants of the Newton update, approximating the second order information
using a small number of carefully chosen directions, called a sketch. While the idea of data sketching
is widely studied [36], as far as we know our work is the first one to apply it to a general adversarial
online learning setting and provide rigorous regret guarantees. Three different sketching methods are
considered: Random Projections [1, 19], Frequent Directions [12, 23], and Oja's algorithm [28, 29],
all of which allow linear running time per round. For the first two methods, we prove regret bounds
similar to the full second order update whenever the sketch-size is large enough. Our analysis makes
it easy to plug in other sketching and online PCA methods (e.g. [11]).
3. Sparse updates (Section 4). For practical implementation, we further develop sparse versions
of these updates with a running time linear in the sparsity of the examples. The main challenge here
is that even if examples are sparse, the sketch matrix still quickly becomes dense. These are the first
known sparse implementations of the Frequent Directions¹ and Oja's algorithm, and they require new
sparse eigen-computation routines that may be of independent interest.

Figure 1: Error rate of SON using Oja's sketch, and AdaGrad on a synthetic ill-conditioned problem.
m is the sketch size (m = 0 is Online Gradient, m = d resembles Online Newton). SON is nearly
invariant to the condition number for m = 10.

Empirically, we evaluate our algorithm using the sparse Oja sketch (called Oja-SON) against
first order methods such as diagonalized AdaGrad [6, 25] on both ill-conditioned synthetic
datasets and a suite of real-world datasets. As Fig. 1 shows for a synthetic problem, we observe
substantial performance gains as data conditioning worsens. On the real-world datasets, we find
improvements in some instances, while observing no substantial second-order signal in the others.
Related work. Our online learning setting is closest to the one proposed in [33], which studies
scale-invariant algorithms, a special case of the invariance property considered here (see also [31,
Section 5]). Computational efficiency, a main concern in this work, is not a problem there since each
coordinate is scaled independently. Orabona and Pál [30] study unrelated notions of invariance. Gao
et al. [9] study a specific randomized sketching method for a special online learning setting.
The L-BFGS algorithm [24] has recently been studied in the stochastic setting² [3, 26, 27, 34, 35], but
it has strong assumptions, with pessimistic rates in theory and reliance on the use of large mini-batches
empirically. Recent works [7, 15, 14, 32] employ sketching in stochastic optimization, but do not
provide sparse implementations or extend in an obvious manner to the online setting. The Frank-Wolfe
algorithm [8, 20] is also invariant to linear transformations, but has worse regret bounds [17]
without further assumptions and modifications [10].
Notation. Vectors are represented by bold letters (e.g., x, w, ...) and matrices by capital letters
(e.g., M, A, ...). M_{i,j} denotes the (i, j) entry of matrix M. I_d represents the d × d identity matrix,
0_{m×d} represents the m × d matrix of zeroes, and diag{x} represents a diagonal matrix with x on
the diagonal. λ_i(A) denotes the i-th largest eigenvalue of A, ||w||_A denotes √(w^⊤ A w), |A| is the
determinant of A, TR(A) is the trace of A, ⟨A, B⟩ denotes Σ_{i,j} A_{i,j} B_{i,j}, and A ⪯ B means that
B − A is positive semidefinite. The sign function SGN(a) is 1 if a ≥ 0 and −1 otherwise.
2 Setup and an Optimal Algorithm
We consider the following setting. On each round t = 1, 2, ..., T: (1) the adversary first presents an
example x_t ∈ R^d, (2) the learner chooses w_t ∈ R^d and predicts w_t^⊤ x_t, (3) the adversary reveals a
loss function f_t(w) = ℓ_t(w^⊤ x_t) for some convex, differentiable ℓ_t : R → R_+, and (4) the learner
suffers loss f_t(w_t) for this round.
The learner's regret to a comparator w is defined as R_T(w) = Σ_{t=1}^{T} f_t(w_t) − Σ_{t=1}^{T} f_t(w). Typical
results study R_T(w) against all w with a bounded norm in some geometry. For an invariant update,
[Footnote 1: Recent work by [13] also studies sparse updates for a more complicated variant of Frequent Directions, which is randomized and incurs extra approximation error.]
[Footnote 2: The stochastic setting assumes that the examples are drawn i.i.d. from a distribution.]
we relax this requirement and only put bounds on the predictions w^⊤ x_t. Specifically, for some
pre-chosen constant C we define K_t = {w : |w^⊤ x_t| ≤ C}. We seek to minimize regret to all
comparators that generate bounded predictions on every data point, that is:

R_T = sup_{w ∈ K} R_T(w),  where  K = ∩_{t=1}^{T} K_t = {w : |w^⊤ x_t| ≤ C for all t = 1, 2, ..., T}.

Under this setup, if the data are transformed to M x_t for all t and some invertible matrix M ∈ R^{d×d},
the optimal w* simply moves to (M^{-1})^⊤ w*, which still has bounded predictions but might have a
significantly larger norm. This relaxation is similar to the comparator set considered in [33].
We make two structural assumptions on the loss functions.
Assumption 1. (Scalar Lipschitz) The loss function ℓ_t satisfies |ℓ_t'(z)| ≤ L whenever |z| ≤ C.
Assumption 2. (Curvature) There exists σ_t ≥ 0 such that for all u, w ∈ K, f_t(w) is lower bounded
by f_t(u) + ∇f_t(u)^⊤ (w − u) + (σ_t/2) (∇f_t(u)^⊤ (u − w))².
Note that when σ_t = 0, Assumption 2 merely imposes convexity. More generally, it is satisfied by the
squared loss f_t(w) = (w^⊤ x_t − y_t)² with σ_t = 1/(8C²) whenever |w^⊤ x_t| and |y_t| are bounded by C,
as well as by all exp-concave functions (see [18, Lemma 3]).
Enlarging the comparator set might result in worse regret. We next show matching upper and lower
bounds, qualitatively similar to the standard setting but with an extra unavoidable √d factor.³
Theorem 1. For any online algorithm generating w_t ∈ R^d and all T ≥ d, there exists a sequence of
T examples x_t ∈ R^d and loss functions ℓ_t satisfying Assumptions 1 and 2 (with σ_t = 0) such that the
regret R_T is at least CL√(dT/2).
We now give an algorithm that matches the lower bound up to logarithmic constants in the worst case
but enjoys much smaller regret when σ_t ≠ 0. At round t + 1, with some invertible matrix A_t specified
later and gradient g_t = ∇f_t(w_t), the algorithm performs the following update before making the
prediction on the example x_{t+1}:

u_{t+1} = w_t − A_t^{-1} g_t,  and  w_{t+1} = argmin_{w ∈ K_{t+1}} ||w − u_{t+1}||_{A_t}.    (1)

The projection onto the set K_{t+1} differs from typical norm-based projections as it only enforces
boundedness with respect to x_{t+1} at round t + 1. Moreover, this projection step can be performed in closed form.
The projection onto the set Kt+1 differs from typical norm-based projections as it only enforces
boundedness on xt+1 at round t + 1. Moreover, this projection step can be performed in closed form.
Lemma 1. For any x 6= 0, u ? Rd and positive definite matrix A ? Rd?d , we have
argmin kw ? ukA = u ?
w : |w> x|?C
?C (u> x) ?1
A x, where ?C (y) = SGN(y) max{|y| ? C, 0}.
x> A?1 x
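In code, the projection of Lemma 1 is a single correction along A^{-1}x. The sketch below is an illustration of mine, not the paper's implementation; it forms A^{-1}x with a dense solve for clarity, whereas the sketched algorithms later avoid materializing A^{-1}.

```python
import numpy as np

def project_to_Kt(u, x, A, C):
    """argmin over {w : |w^T x| <= C} of ||w - u||_A, per Lemma 1."""
    tau = np.sign(u @ x) * max(abs(u @ x) - C, 0.0)  # tau_C(u^T x)
    Ainv_x = np.linalg.solve(A, x)                   # A^{-1} x
    return u - (tau / (x @ Ainv_x)) * Ainv_x
```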
If A_t is a diagonal matrix, updates similar to those of Ross et al. [33] are recovered. We study a
choice of A_t that is similar to the Online Newton Step (ONS) [18] (though with different projections):

A_t = αI_d + Σ_{s=1}^{t} (σ_s + η_s) g_s g_s^⊤    (2)

for some parameters α > 0 and η_t ≥ 0. The regret guarantee of this algorithm is shown below:
Theorem 2. Under Assumptions 1 and 2, suppose that σ_t ≥ σ ≥ 0 for all t, and η_t is non-increasing.
Then using the matrices (2) in the updates (1) yields, for all w ∈ K,

R_T(w) ≤ (α/2)||w||_2^2 + 2(CL)² Σ_{t=1}^{T} η_t + (d / (2(σ + η_T))) ln(1 + (σ + η_T) Σ_{t=1}^{T} ||g_t||_2^2 / (dα)).
[Footnote 3: In the standard setting where w_t and x_t are restricted such that ||w_t|| ≤ D and ||x_t|| ≤ X, the minimax regret is O(DXL√T). This is clearly a special case of our setting with C = DX.]
Algorithm 1 Sketched Online Newton (SON)
Input: parameters C, α and m.
1: Initialize u_1 = 0_{d×1}.
2: Initialize sketch (S, H) ← SketchInit(α, m).
3: for t = 1 to T do
4:   Receive example x_t.
5:   Projection step: compute x̂ = S x_t and γ = τ_C(u_t^⊤ x_t) / (x_t^⊤ x_t − x̂^⊤ H x̂), and set w_t = u_t − γ (x_t − S^⊤ H x̂).
6:   Predict label y_t = w_t^⊤ x_t and suffer loss ℓ_t(y_t).
7:   Compute gradient g_t = ℓ_t'(y_t) x_t and the to-sketch vector ĝ_t = √(σ_t + η_t) g_t.
8:   (S, H) ← SketchUpdate(ĝ_t).
9:   Update weight: u_{t+1} = w_t − (1/α)(g_t − S^⊤ H S g_t).
10: end for
The dependence on ||w||_2^2 implies that the method is not completely invariant to transformations of
the data. This is due to the αI_d part of A_t. However, this is not critical, since α is fixed and small
while the other part of the bound grows and eventually becomes the dominant term. Moreover, we
can even set α = 0 and replace the inverse with the Moore-Penrose pseudoinverse to obtain a truly
invariant algorithm, as discussed in Appendix D. We use α > 0 in the remainder for simplicity.
The implication of this regret bound is the following: in the worst case where σ = 0, we set
η_t = √(d / (C²L²t)) and the bound simplifies to

R_T(w) ≤ (α/2)||w||_2^2 + (CL/2)√(Td) · ln(1 + Σ_{t=1}^{T} ||g_t||_2^2 / (αCL√(Td))) + 4CL√(Td),

essentially only losing a logarithmic factor compared to the lower bound in Theorem 1. On the other
hand, if σ_t ≥ σ > 0 for all t, then we set η_t = 0 and the regret simplifies to

R_T(w) ≤ (α/2)||w||_2^2 + (d/(2σ)) ln(1 + σ Σ_{t=1}^{T} ||g_t||_2^2 / (dα)),    (3)

extending the O(d ln T) results in [18] to the weaker Assumption 2 and the larger comparator set K.
3 Efficiency via Sketching
Our algorithm so far requires Ω(d²) time and space, just as ONS. In this section we show how to
achieve regret guarantees nearly as good as the above bounds, while keeping computation within a
constant factor of first order methods.
Let G_t ∈ R^{t×d} be the matrix whose s-th row is ĝ_s^⊤, where we define ĝ_t = √(σ_t + η_t) g_t to be
the to-sketch vector. Our previous choice of A_t (Eq. (2)) can be written as αI_d + G_t^⊤ G_t. The idea
of sketching is to maintain an approximation of G_t, denoted by S_t ∈ R^{m×d}, where m ≪ d is a
small constant called the sketch size. If m is chosen so that S_t^⊤ S_t approximates G_t^⊤ G_t well, we can
redefine A_t as αI_d + S_t^⊤ S_t for the algorithm.
To see why this admits an efficient algorithm, notice that by the Woodbury formula one has
A_t^{-1} = (1/α)(I_d − S_t^⊤ (αI_m + S_t S_t^⊤)^{-1} S_t). With the notation H_t = (αI_m + S_t S_t^⊤)^{-1} ∈ R^{m×m} and
Γ_t = τ_C(u_{t+1}^⊤ x_{t+1}) / (x_{t+1}^⊤ x_{t+1} − x_{t+1}^⊤ S_t^⊤ H_t S_t x_{t+1}), update (1) becomes:

u_{t+1} = w_t − (1/α)(g_t − S_t^⊤ H_t S_t g_t),  and  w_{t+1} = u_{t+1} − Γ_t (x_{t+1} − S_t^⊤ H_t S_t x_{t+1}).

The operations involving S_t g_t or S_t x_{t+1} require only O(md) time, while matrix-vector products
with H_t require only O(m²). Altogether, these updates are at most m times more expensive than first
order algorithms, as long as S_t and H_t can be maintained efficiently. We call this algorithm Sketched
Online Newton (SON) and summarize it in Algorithm 1.
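The following NumPy sketch of mine (not the released implementation) spells out one round of SON using only the S g_t, S x_t and H operations above; the `grad_scalar` callable (the loss derivative ℓ_t') and the placement of the sketch update are assumptions consistent with Algorithm 1.

```python
import numpy as np

def son_round(u, x, S, H, C, alpha, grad_scalar, sigma_eta):
    """One round of Algorithm 1. S: (m, d) sketch, H: (m, m), u: (d,) weights."""
    # Projection step (Lemma 1 with A = alpha*I + S^T S, via Woodbury).
    xh = S @ x
    denom = x @ x - xh @ (H @ xh)
    tau = np.sign(u @ x) * max(abs(u @ x) - C, 0.0)
    w = u - (tau / denom) * (x - S.T @ (H @ xh))
    # Prediction and gradient g_t = l'(w^T x) * x.
    g = grad_scalar(w @ x) * x
    g_hat = np.sqrt(sigma_eta) * g          # to-sketch vector
    # Sketch update would go here: S, H = sketch_update(S, H, g_hat).
    # Descent step: u_{t+1} = w - (1/alpha)(g - S^T H S g).
    u_next = w - (g - S.T @ (H @ (S @ g))) / alpha
    return u_next, w, g_hat
```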
We now discuss three sketching techniques to maintain the matrices S_t and H_t efficiently, each
requiring O(md) storage and time linear in d.
Algorithm 2 FD-Sketch for FD-SON
Internal State: S and H.
SketchInit(α, m)
1: Set S = 0_{m×d} and H = (1/α) I_m.
2: Return (S, H).
SketchUpdate(ĝ)
1: Insert ĝ into the last row of S.
2: Compute the eigendecomposition V^⊤ Σ V = S^⊤ S and set S = (Σ − Σ_{m,m} I_m)^{1/2} V.
3: Set H = diag{1/(α + Σ_{1,1} − Σ_{m,m}), ..., 1/α}.
4: Return (S, H).

Algorithm 3 Oja's Sketch for Oja-SON
Internal State: t, Λ, V and H.
SketchInit(α, m)
1: Set t = 0, Λ = 0_{m×m}, H = (1/α) I_m and V to any m × d matrix with orthonormal rows.
2: Return (0_{m×d}, H).
SketchUpdate(ĝ)
1: Update t ← t + 1, and update Λ and V as in Eq. (4).
2: Set S = (tΛ)^{1/2} V.
3: Set H = diag{1/(α + tΛ_{1,1}), ..., 1/(α + tΛ_{m,m})}.
4: Return (S, H).
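As a concrete rendering of Algorithm 2's SketchUpdate, the snippet below (a sketch of mine) uses an SVD of the m × d matrix S in place of the eigendecomposition of S^⊤S; the amortized O(md) variant of [12] and Appendix G.2 is more involved.

```python
import numpy as np

def fd_sketch_update(S, g_hat, alpha):
    """Frequent Directions SketchUpdate (Algorithm 2): insert g_hat into the
    last (zero) row of S, then shrink the spectrum by the smallest eigenvalue."""
    S = S.copy()
    S[-1] = g_hat
    _, sig, Vt = np.linalg.svd(S, full_matrices=False)  # S^T S = V diag(sig^2) V^T
    lam = sig ** 2                                      # descending eigenvalues
    shrunk = np.clip(lam - lam[-1], 0.0, None)          # subtract lambda_m
    S_new = np.sqrt(shrunk)[:, None] * Vt               # (Sigma - lam_m I)^{1/2} V
    H = np.diag(1.0 / (alpha + shrunk))                 # last entry is 1/alpha
    return S_new, H
```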
Random Projection (RP). Random projections are classical methods for sketching [19, 1, 21].
Here we consider the Gaussian Random Projection sketch: S_t = S_{t−1} + r_t ĝ_t^⊤, where each entry of
r_t ∈ R^m is an independent random Gaussian variable drawn from N(0, 1/m). One can verify that
the update of H_t can be realized by two rank-one updates: H_t^{-1} = H_{t−1}^{-1} + q_t r_t^⊤ + r_t q_t^⊤, where
q_t = S_t ĝ_t − (||ĝ_t||_2^2 / 2) r_t. Using the Woodbury formula, this results in O(md) updates of S and H (see
Algorithm 6 in Appendix E). We call this combination of SON with the RP-sketch RP-SON. When α = 0
this algorithm is invariant to linear transformations for each fixed realization of the randomness.
Using the existing guarantees for the RP-sketch, in Appendix E we show a regret bound similar to
Theorem 2 up to constants, provided m = Ω̃(r), where r is the rank of G_T. Therefore RP-SON is
nearly invariant, and gives substantial computational gains when r ≪ d with small regret overhead.
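An unoptimized NumPy version of the RP update, for reference (mine); it re-inverts the m × m matrix directly instead of applying the rank-one updates of Algorithm 6.

```python
import numpy as np

def rp_sketch_update(S, g_hat, alpha, rng):
    """Gaussian Random Projection update: S_t = S_{t-1} + r_t g_hat^T."""
    m = S.shape[0]
    r = rng.normal(0.0, 1.0 / np.sqrt(m), size=m)        # r_t ~ N(0, 1/m)^m
    S_new = S + np.outer(r, g_hat)
    H = np.linalg.inv(alpha * np.eye(m) + S_new @ S_new.T)  # m x m only
    return S_new, H

# usage: S, H = rp_sketch_update(S, g_hat, alpha, np.random.default_rng(0))
```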
Frequent Directions (FD). When G_T is near full-rank, however, RP-SON may not perform well.
To address this, we consider the Frequent Directions (FD) sketch [12, 23], a deterministic sketching
method. FD maintains the invariant that the last row of S_t is always 0. On each round, the vector ĝ_t
is inserted into the last row of S_{t−1}, then the covariance of the resulting matrix is eigendecomposed
into V_t^⊤ Σ_t V_t, and S_t is set to (Σ_t − ρ_t I_m)^{1/2} V_t, where ρ_t is the smallest eigenvalue. Since the rows
of S_t are orthogonal to each other, H_t is a diagonal matrix and can be maintained efficiently (see
Algorithm 2). The sketch update works in O(md) time (see [12] and Appendix G.2), so the total
running time is O(md) per round. We call this combination FD-SON and prove the following regret
bound, with the notation Ω_k = Σ_{i=k+1}^{d} λ_i(G_T^⊤ G_T) for any k = 0, ..., m − 1.
Theorem 3. Under Assumptions 1 and 2, suppose that σ_t ≥ σ ≥ 0 for all t and η_t is non-increasing.
FD-SON ensures that for any w ∈ K and k = 0, ..., m − 1, we have

R_T(w) ≤ (α/2)||w||_2^2 + 2(CL)² Σ_{t=1}^{T} η_t + (m / (2(σ + η_T))) ln(1 + TR(S_T^⊤ S_T) / (mα)) + m Ω_k / (2(m − k)(σ + η_T) α).

Instead of the rank, the bound depends on the spectral decay Ω_k, which essentially is the only extra
term compared to the bound in Theorem 2. Similarly to the previous discussion, if σ_t ≥ σ > 0, we get the
bound

(α/2)||w||_2^2 + (m/(2σ)) ln(1 + TR(S_T^⊤ S_T)/(mα)) + m Ω_k / (2(m − k)σα).

With α tuned well, we pay logarithmic regret
for the top m eigenvectors, but a square-root regret O(√Ω_k) for the remaining directions not controlled
by our sketch. This is expected for deterministic sketching, which focuses on the dominant part of the
spectrum. When α is not tuned we still get sublinear regret as long as Ω_k is sublinear.
Oja's Algorithm. Oja's algorithm [28, 29] is not usually considered a sketching algorithm,
but it is very natural here. This algorithm uses online gradient descent to find eigenvectors and
eigenvalues of data in a streaming fashion, with the to-sketch vectors ĝ_t as input. Specifically,
let V_t ∈ R^{m×d} denote the estimated eigenvectors, and let the diagonal matrix Λ_t ∈ R^{m×m} contain the
estimated eigenvalues at the end of round t. Oja's algorithm updates as:

Λ_t = (I_m − Γ_t) Λ_{t−1} + Γ_t diag{V_{t−1} ĝ_t}²,   V_t ←(orth) V_{t−1} + Γ_t V_{t−1} ĝ_t ĝ_t^⊤,    (4)

where Γ_t ∈ R^{m×m} is a diagonal matrix with (possibly different) learning rates of order Θ(1/t)
on the diagonal, and the '←(orth)' operator represents an orthonormalizing step.⁴ The sketch is then
S_t = (tΛ_t)^{1/2} V_t. The rows of S_t are orthogonal, and thus H_t is an efficiently maintainable diagonal
matrix (see Algorithm 3). We call this combination Oja-SON.
The time complexity of Oja's algorithm is O(m²d) per round due to the orthonormalizing step. To
improve the running time to O(md), one can update the sketch only every m rounds (similar to
the block power method [16, 22]). The regret guarantee of this algorithm is unclear, since the existing
analysis for Oja's algorithm applies only to the stochastic setting (see e.g. [2, 22]). However, Oja-SON
provides good performance experimentally.
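A literal NumPy transcription of Eq. (4), for reference (mine); the orthonormalization is done here with a QR factorization, one common choice, and the stepsize matrix is taken as Γ_t = (1/t) I_m as in the experiments of Section 5.

```python
import numpy as np

def oja_update(V, Lam, g_hat, t):
    """One step of Oja's algorithm, Eq. (4). V: (m, d) with orthonormal rows,
    Lam: (m,) eigenvalue estimates, g_hat: (d,) to-sketch vector, t >= 1."""
    gamma = 1.0 / t                               # Gamma_t = (1/t) I_m
    proj = V @ g_hat                              # V_{t-1} g_hat, shape (m,)
    Lam = (1 - gamma) * Lam + gamma * proj ** 2   # eigenvalue estimates
    V = V + gamma * np.outer(proj, g_hat)         # V + Gamma_t V g g^T
    Q, _ = np.linalg.qr(V.T)                      # re-orthonormalize the rows
    return Q.T, Lam
```

The sketch for round t is then recovered as `S = np.sqrt(t * Lam)[:, None] * V`.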
4 Sparse Implementation
In many applications, examples (and hence gradients) are sparse, in the sense that ||x_t||_0 ≤ s for all t
and some small constant s ≪ d. Most online first order methods enjoy a per-example running time
depending on s instead of d in such settings. Achieving the same for second order methods is more
difficult, since A_t^{-1} g_t (or its sketched versions) is typically dense even if g_t is sparse.
We show how to implement our algorithms in sparsity-dependent time, specifically, in O(m² + ms)
for RP-SON and FD-SON and in O(m³ + ms) for Oja-SON. We emphasize that, since the
sketch quickly becomes a dense matrix even if the examples are sparse, achieving purely
sparsity-dependent time is highly non-trivial (especially for FD-SON and Oja-SON), and may be of
independent interest. Due to space limits, below we only briefly mention how to do this for Oja-SON.
A similar discussion for the other two sketches can be found in Appendix G. Note that mathematically
these updates are equivalent to their non-sparse counterparts, and the regret guarantees are thus unchanged.
There are two ingredients to doing this for Oja-SON: (1) the eigenvectors V_t are represented as
V_t = F_t Z_t, where Z_t ∈ R^{m×d} is a sparsely updatable direction (Step 3 in Algorithm 5) and
F_t ∈ R^{m×m} is a matrix such that F_t Z_t is orthonormal; (2) the weights w_t are split as w̄_t + Z_{t−1}^⊤ b_t,
where b_t ∈ R^m maintains the weights on the subspace captured by V_{t−1} (same as Z_{t−1}), and w̄_t
captures the weights on the complementary subspace, which are again updated sparsely.
We describe the sparse updates for w̄_t and b_t below, with the details for F_t and Z_t deferred to
Appendix H. Since S_t = (tΛ_t)^{1/2} V_t = (tΛ_t)^{1/2} F_t Z_t and w_t = w̄_t + Z_{t−1}^⊤ b_t, we know u_{t+1} is

w_t − (1/α)(I_d − S_t^⊤ H_t S_t) g_t = [w̄_t − (1/α) g_t − (Z_t − Z_{t−1})^⊤ b_t] + Z_t^⊤ [b_t + (1/α) F_t^⊤ (tΛ_t H_t) F_t Z_t g_t],    (5)

where the first bracket defines ū_{t+1} and the second defines b'_{t+1}. Since Z_t − Z_{t−1} is sparse by
construction and the matrix operations defining b'_{t+1} scale with m, overall the update can be done in
O(m² + ms) time. Using the update for w_{t+1} in terms of u_{t+1}, w_{t+1} is equal to

u_{t+1} − Γ_t (I_d − S_t^⊤ H_t S_t) x_{t+1} = [ū_{t+1} − Γ_t x_{t+1}] + Z_t^⊤ [b'_{t+1} + Γ_t F_t^⊤ (tΛ_t H_t) F_t Z_t x_{t+1}],    (6)

where the first bracket defines w̄_{t+1} and the second defines b_{t+1}. Again, it is clear that all the
computations scale with s and not d, so both w̄_{t+1} and b_{t+1} require only O(m² + ms) time to maintain.
Furthermore, the prediction w_t^⊤ x_t = w̄_t^⊤ x_t + b_t^⊤ Z_{t−1} x_t can also
be computed in O(ms) time. The O(m³) in the overall complexity comes from a Gram-Schmidt
step in maintaining F_t (details in Appendix H).
The pseudocode is presented in Algorithms 4 and 5, with some details deferred to Appendix H. This
is the first sparse implementation of online eigenvector computation, to the best of our knowledge.
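To see why the split w = w̄ + Z^⊤b keeps prediction cheap, note that only the nonzero coordinates of x touch Z. A small sketch of mine, with x given as index/value arrays:

```python
import numpy as np

def predict_split(w_bar, b, Z, idx, vals):
    """w^T x = w_bar^T x + b^T (Z x) in O(ms) time for an s-sparse x
    given as integer indices `idx` and values `vals`."""
    return w_bar[idx] @ vals + b @ (Z[:, idx] @ vals)
```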
5 Experiments
Preliminary experiments revealed that, out of our three sketching options, Oja's sketch generally has
better performance (see Appendix I). For a more thorough evaluation, we implemented the sparse
version of Oja-SON in Vowpal Wabbit.⁵

[Footnote 4: For simplicity, we assume that V_{t−1} + Γ_t V_{t−1} ĝ_t ĝ_t^⊤ is always of full rank, so that the orthonormalizing step does not reduce the dimension of V_t.]
Algorithm 4 Sparse Sketched Online Newton with Oja's Algorithm
Input: parameters C, α and m.
1: Initialize ū = 0_{d×1} and b = 0_{m×1}.
2: (Λ, F, Z, H) ← SketchInit(α, m) (Algorithm 5).
3: for t = 1 to T do
4:   Receive example x_t.
5:   Projection step: compute x̂ = F Z x_t and γ = τ_C(ū^⊤ x_t + b^⊤ Z x_t) / (x_t^⊤ x_t − (t − 1) x̂^⊤ Λ H x̂).
6:   Obtain w̄ = ū − γ x_t and b ← b + γ (t − 1) F^⊤ Λ H x̂ (Equation 6).
7:   Predict label y_t = w̄^⊤ x_t + b^⊤ Z x_t and suffer loss ℓ_t(y_t).
8:   Compute gradient g_t = ℓ_t'(y_t) x_t and the to-sketch vector ĝ = √(σ_t + η_t) g_t.
9:   (Λ, F, Z, H, δ) ← SketchUpdate(ĝ) (Algorithm 5).
10:  Update weight: ū = w̄ − (1/α) g_t − (δ^⊤ b) ĝ and b ← b + (1/α) t F^⊤ Λ H F Z g_t (Equation 5).
11: end for
Algorithm 5 Sparse Oja's Sketch
Internal State: t, Λ, F, Z, H and K.
SketchInit(α, m)
1: Set t = 0, Λ = 0_{m×m}, F = K = αH = I_m and Z to any m × d matrix with orthonormal rows.
2: Return (Λ, F, Z, H).
SketchUpdate(ĝ)
1: Update t ← t + 1. Pick a diagonal stepsize matrix Γ_t and update Λ ← (I − Γ_t)Λ + Γ_t diag{F Z ĝ}².
2: Set δ = F^{-1} Γ_t F Z ĝ and update K ← K + δ(Z ĝ)^⊤ + (Z ĝ)δ^⊤ + (ĝ^⊤ ĝ) δδ^⊤.
3: Update Z ← Z + δ ĝ^⊤.
4: (L, Q) ← Decompose(F, K) (Algorithm 13), so that LQZ = FZ and QZ is orthogonal. Set F = Q.
5: Set H ← diag{1/(α + tΛ_{1,1}), ..., 1/(α + tΛ_{m,m})}.
6: Return (Λ, F, Z, H, δ).
We compare it with AdaGrad [6, 25] on both synthetic and
real-world datasets. Each algorithm takes a stepsize parameter: 1/α serves as the stepsize for Oja-SON,
and a scaling constant on the gradient matrix for AdaGrad. We try both methods with the parameter
set to 2^j for j = −3, −2, ..., 6 and report the best results. We keep the stepsize matrix in Oja-SON
fixed as Γ_t = (1/t) I_m throughout. All methods make one online pass over the data, minimizing the square loss.
5.1 Synthetic Datasets
To investigate Oja-SON's performance in the setting it is really designed for, we generated a range of synthetic ill-conditioned datasets as follows. We picked a random Gaussian matrix Z ∈ R^{T×d} (T = 10,000 and d = 100) and a random orthonormal basis V ∈ R^{d×d}. We chose a specific spectrum λ ∈ R^d where the first d − 10 coordinates are 1 and the rest increase linearly to some fixed condition number parameter κ. We let X = Z diag{λ}^{1/2} V^⊤ be our example matrix, and created a binary classification problem with labels y = sign(w^⊤x), where w ∈ R^d is a random vector. We generated 20 such datasets with the same Z, V and labels y but different values of κ ∈ {10, 20, ..., 200}. Note that if the algorithm is truly invariant, it would have the same behavior on these 20 datasets.
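A compact recipe for this construction is sketched below (our own variable names and seeding; the text fixes Z, V and the labels across the 20 datasets and varies only the condition number, so in practice one would draw them once and sweep kappa).

import numpy as np

def make_ill_conditioned(T=10000, d=100, kappa=10, seed=0):
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal((T, d))                      # random Gaussian matrix
    V = np.linalg.qr(rng.standard_normal((d, d)))[0]     # random orthonormal basis
    lam = np.concatenate([np.ones(d - 10),
                          np.linspace(1.0, kappa, 10)])  # flat spectrum, then linear growth
    X = Z @ np.diag(np.sqrt(lam)) @ V.T                  # example matrix
    w = rng.standard_normal(d)
    y = np.sign(X @ w)                                   # binary labels
    return X, y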
Fig. 1 (in Section 1) shows the final progressive error (i.e. the fraction of misclassified examples after one pass over the data) for AdaGrad and Oja-SON (with sketch size m = 0, 5, 10) as the condition number increases. As expected, the plot confirms that the performance of first order methods such as AdaGrad degrades when the data is ill-conditioned. The plot also shows that as the sketch size increases, Oja-SON becomes more accurate: when m = 0 (no sketch at all), Oja-SON is vanilla gradient descent and is worse than AdaGrad as expected; when m = 5, the accuracy greatly improves; and finally when m = 10, the accuracy of Oja-SON is substantially better and hardly worsens with κ.
⁵ An open source machine learning toolkit available at http://hunch.net/~vw
Figure 2: Oja's algorithm's eigenvalue recovery error: the relative eigenvalue difference against the number of examples, shown for κ = 50, 100, 150, 200.

Figure 3: (a) Comparison of two sketch sizes on real data (error rate of Oja-SON, m = 0 vs. m = 10), and (b) comparison against AdaGrad on real data (error rate of AdaGrad vs. error rate of Oja-SON, for m = 0 and m = 10).
To further explain the effectiveness of Oja's algorithm in identifying top eigenvalues and eigenvectors, the plot in Fig. 2 shows the largest relative difference between the true and estimated top 10 eigenvalues as Oja's algorithm sees more data. This gap drops quickly after seeing just 500 examples.
5.2 Real-world Datasets
Next we evaluated Oja-SON on 23 benchmark datasets from the UCI and LIBSVM repositories (see Appendix I for a description of these datasets). Note that some datasets are very high dimensional but very sparse (e.g. for 20news, d ≈ 102,000 and s ≈ 94), and consequently methods with running time quadratic (such as ONS) or even linear in the dimension rather than the sparsity are prohibitive.
In Fig. 3(a), we show the effect of using sketched second order information, by comparing sketch sizes m = 0 and m = 10 for Oja-SON (concrete error rates in Appendix I). We observe significant improvements on 5 datasets (acoustic, census, heart, ionosphere, letter), demonstrating the advantage of using second order information. However, we found that Oja-SON was outperformed by AdaGrad on most datasets, mostly because the diagonal adaptation of AdaGrad greatly reduces the condition number on these datasets. Moreover, one disadvantage of SON is that for the directions not in the sketch, it is essentially doing vanilla gradient descent. We expect better results using diagonal adaptation as in AdaGrad in the off-sketch directions.
To incorporate this high level idea, we performed a simple modification to Oja-SON: upon seeing example x_t, we feed D_t^{−1/2} x_t to our algorithm instead of x_t, where D_t ∈ R^{d×d} is the diagonal part of the matrix Σ_{τ=1}^{t−1} g_τ g_τ^⊤.⁶ The intuition is that this diagonal rescaling first homogenizes the scales of all dimensions. Any remaining ill-conditioning is further addressed by the sketching to some degree, while the complementary subspace is no worse off than with AdaGrad. We believe this flexibility in picking the right vectors to sketch is an attractive aspect of our sketching-based approach.
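A minimal sketch of this preprocessing, assuming a streaming loop and hypothetical helper names (grad_fn stands in for whatever loss gradient the learner produces on the rescaled example):

import numpy as np

def precondition_stream(examples, grad_fn, d, eps=0.1):
    # Yields D_t^{-1/2} x_t; D is the running diagonal of sum_{tau < t} g g^T,
    # initialised to eps * I_d (the text uses eps = 0.1) to avoid division by zero.
    D = eps * np.ones(d)
    for x in examples:
        x_tilde = x / np.sqrt(D)
        yield x_tilde
        g = grad_fn(x_tilde)   # the learner's gradient on the rescaled example
        D += g * g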
With this modification, Oja-SON outperforms AdaGrad on most of the datasets even for m = 0, as shown in Fig. 3(b) (concrete error rates in Appendix I). The improvement over AdaGrad at m = 0 is surprising but not impossible, as the updates are not identical; our update is scale invariant like that of Ross et al. [33]. However, the diagonal adaptation already greatly reduces the condition number on all datasets except splice (see Fig. 4 in Appendix I for detailed results on this dataset), so little improvement is seen for sketch size m = 10 over m = 0. For several datasets, we verified the accuracy of Oja's method in computing the top-few eigenvalues (Appendix I), so the lack of difference between sketch sizes is due to the lack of second order information after the diagonal correction. The average running time of our algorithm when m = 10 is about 11 times slower than AdaGrad, matching expectations. Overall, SON can significantly outperform baselines on ill-conditioned data, while maintaining a practical computational complexity.
Acknowledgements This work was done when Haipeng Luo and Nicolò Cesa-Bianchi were at Microsoft Research, New York.

⁶ D_1 is defined as 0.1 · I_d to avoid division by zero.
References
[1] D. Achlioptas. Database-friendly random projections: Johnson-Lindenstrauss with binary coins. Journal of Computer and System Sciences, 66(4):671–687, 2003.
[2] A. Balsubramani, S. Dasgupta, and Y. Freund. The fast convergence of incremental PCA. In NIPS, 2013.
[3] R. H. Byrd, S. Hansen, J. Nocedal, and Y. Singer. A stochastic quasi-Newton method for large-scale optimization. SIAM Journal on Optimization, 26:1008–1031, 2016.
[4] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
[5] N. Cesa-Bianchi, A. Conconi, and C. Gentile. A second-order perceptron algorithm. SIAM Journal on Computing, 34(3):640–668, 2005.
[6] J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. JMLR, 12:2121–2159, 2011.
[7] M. A. Erdogdu and A. Montanari. Convergence rates of sub-sampled Newton methods. In NIPS, 2015.
[8] M. Frank and P. Wolfe. An algorithm for quadratic programming. Naval Research Logistics Quarterly, 3(1-2):95–110, 1956.
[9] W. Gao, R. Jin, S. Zhu, and Z.-H. Zhou. One-pass AUC optimization. In ICML, 2013.
[10] D. Garber and E. Hazan. A linearly convergent conditional gradient algorithm with applications to online and stochastic optimization. SIAM Journal on Optimization, 26:1493–1528, 2016.
[11] D. Garber, E. Hazan, and T. Ma. Online learning of eigenvectors. In ICML, 2015.
[12] M. Ghashami, E. Liberty, J. M. Phillips, and D. P. Woodruff. Frequent directions: Simple and deterministic matrix sketching. SIAM Journal on Computing, 45:1762–1792, 2015.
[13] M. Ghashami, E. Liberty, and J. M. Phillips. Efficient frequent directions algorithm for sparse matrices. In KDD, 2016.
[14] A. Gonen and S. Shalev-Shwartz. Faster SGD using sketched conditioning. arXiv:1506.02649, 2015.
[15] A. Gonen, F. Orabona, and S. Shalev-Shwartz. Solving ridge regression using sketched preconditioned SVRG. In ICML, 2016.
[16] M. Hardt and E. Price. The noisy power method: A meta algorithm with applications. In NIPS, 2014.
[17] E. Hazan and S. Kale. Projection-free online learning. In ICML, 2012.
[18] E. Hazan, A. Agarwal, and S. Kale. Logarithmic regret algorithms for online convex optimization. Machine Learning, 69(2-3):169–192, 2007.
[19] P. Indyk and R. Motwani. Approximate nearest neighbors: towards removing the curse of dimensionality. In STOC, 1998.
[20] M. Jaggi. Revisiting Frank-Wolfe: Projection-free sparse convex optimization. In ICML, 2013.
[21] D. M. Kane and J. Nelson. Sparser Johnson-Lindenstrauss transforms. Journal of the ACM, 61(1):4, 2014.
[22] C.-L. Li, H.-T. Lin, and C.-J. Lu. Rivalry of two families of algorithms for memory-restricted streaming PCA. arXiv:1506.01490, 2015.
[23] E. Liberty. Simple and deterministic matrix sketching. In KDD, 2013.
[24] D. C. Liu and J. Nocedal. On the limited memory BFGS method for large scale optimization. Mathematical Programming, 45(1-3):503–528, 1989.
[25] H. B. McMahan and M. Streeter. Adaptive bound optimization for online convex optimization. In COLT, 2010.
[26] A. Mokhtari and A. Ribeiro. Global convergence of online limited memory BFGS. JMLR, 16:3151–3181, 2015.
[27] P. Moritz, R. Nishihara, and M. I. Jordan. A linearly-convergent stochastic L-BFGS algorithm. In AISTATS, 2016.
[28] E. Oja. Simplified neuron model as a principal component analyzer. Journal of Mathematical Biology, 15(3):267–273, 1982.
[29] E. Oja and J. Karhunen. On stochastic approximation of the eigenvectors and eigenvalues of the expectation of a random matrix. Journal of Mathematical Analysis and Applications, 106(1):69–84, 1985.
[30] F. Orabona and D. Pál. Scale-free algorithms for online linear optimization. In ALT, 2015.
[31] F. Orabona, K. Crammer, and N. Cesa-Bianchi. A generalized online mirror descent with applications to classification and regression. Machine Learning, 99(3):411–435, 2015.
[32] M. Pilanci and M. J. Wainwright. Newton sketch: A linear-time optimization algorithm with linear-quadratic convergence. arXiv:1505.02250, 2015.
[33] S. Ross, P. Mineiro, and J. Langford. Normalized online learning. In UAI, 2013.
[34] N. N. Schraudolph, J. Yu, and S. Günter. A stochastic quasi-Newton method for online convex optimization. In AISTATS, 2007.
[35] J. Sohl-Dickstein, B. Poole, and S. Ganguli. Fast large-scale optimization by unifying stochastic gradient and quasi-Newton methods. In ICML, 2014.
[36] D. P. Woodruff. Sketching as a tool for numerical linear algebra. Foundations and Trends in Machine Learning, 10(1-2):1–157, 2014.
5,756 | 6,208 | Rényi Divergence Variational Inference
Yingzhen Li
University of Cambridge
Cambridge, CB2 1PZ, UK
[email protected]
Richard E. Turner
University of Cambridge
Cambridge, CB2 1PZ, UK
[email protected]
Abstract
This paper introduces the variational Rényi bound (VR) that extends traditional variational inference to Rényi's α-divergences. This new family of variational methods unifies a number of existing approaches, and enables a smooth interpolation from the evidence lower-bound to the log (marginal) likelihood that is controlled by the value of α that parametrises the divergence. The reparameterization trick, Monte Carlo approximation and stochastic optimisation methods are deployed to obtain a tractable and unified framework for optimisation. We further consider negative α values and propose a novel variational inference method as a new special case in the proposed framework. Experiments on Bayesian neural networks and variational auto-encoders demonstrate the wide applicability of the VR bound.
1 Introduction

Approximate inference, that is, approximating posterior distributions and likelihood functions, is at the core of modern probabilistic machine learning. This paper focuses on optimisation-based approximate inference algorithms, popular examples of which include variational inference (VI), variational Bayes (VB) [1, 2] and expectation propagation (EP) [3, 4]. Historically, VI has received more attention compared to other approaches, although EP can be interpreted as iteratively minimising a set of local divergences [5]. This is mainly because VI has elegant and useful theoretical properties, such as the fact that it proposes a lower-bound of the log-model evidence. Such a lower-bound can serve as a surrogate to both maximum likelihood estimation (MLE) of the hyper-parameters and posterior approximation by Kullback-Leibler (KL) divergence minimisation.

Recent advances in approximate inference follow three major trends. First, scalable methods, e.g. stochastic variational inference (SVI) [6] and stochastic expectation propagation (SEP) [7, 8], have been developed for datasets comprising millions of datapoints. Recent approaches [9, 10, 11] have also applied variational methods to coordinate parallel updates arising from computations performed on chunks of data. Second, Monte Carlo methods and black-box inference techniques have been deployed to assist variational methods, e.g. see [12, 13, 14, 15] for VI and [16] for EP. They all proposed ascending the Monte Carlo approximated variational bounds to the log-likelihood using noisy gradients computed with automatic differentiation tools. Third, tighter variational lower-bounds have been proposed for (approximate) MLE. The importance weighted auto-encoder (IWAE) [17] improved upon the variational auto-encoder (VAE) [18, 19] framework, by providing tighter lower-bound approximations to the log-likelihood using importance sampling. These recent developments are rather separate, and little work has been done to understand their connections.

In this paper we try to provide a unified framework from an energy function perspective that encompasses a number of recent advances in variational methods, and we hope our effort could potentially motivate new algorithms in the future. This is done by extending traditional VI to Rényi's α-divergence [20], a rich family that includes many well-known divergences as special cases. After reviewing useful properties of Rényi divergences and the VI framework, we make the following contributions:
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Table 1: Special cases in the Rényi divergence family.

α         Definition                              Notes
α → 1     ∫ p(θ) log(p(θ)/q(θ)) dθ                Kullback-Leibler (KL) divergence, used in VI (KL[q||p]) and EP (KL[p||q])
α = 0.5   −2 log(1 − Hel²[p||q])                  function of the square Hellinger distance
α → 0     − log ∫_{p(θ)>0} q(θ) dθ                zero when supp(q) ⊆ supp(p) (not a divergence)
α = 2     − log(1 − χ²[p||q])                     proportional to the χ²-divergence
α → +∞    log max_{θ∈Θ} p(θ)/q(θ)                 worst-case regret in minimum description length principle [24]
• We introduce the variational Rényi bound (VR) as an extension of VI/VB. We then discuss connections to existing approaches, including VI/VB, VAE, IWAE [17], SEP [7] and black-box alpha (BB-α) [16], thereby showing the richness of this new family of variational methods.
• We develop an optimisation framework for the VR bound. An analysis of the bias introduced by stochastic approximation is also provided, with theoretical guarantees and empirical results.
• We propose a novel approximate inference algorithm called VR-max as a new special case. Evaluations on VAEs and Bayesian neural networks show that this new method is often comparable to, or even better than, a number of state-of-the-art variational methods.
2 Background

This section reviews Rényi's α-divergence and variational inference, upon which the new framework is based. Note that there exist other α-divergence definitions [21, 22] (see appendix). However we mainly focus on Rényi's definition, as it enables us to derive a new class of variational lower-bounds.
2.1 Rényi's α-divergence

We first review Rényi's α-divergence [20, 23]. Rényi's α-divergence, defined on {α : α > 0, α ≠ 1, |D_α| < +∞}, measures the "closeness" of two distributions p and q on a random variable θ ∈ Θ:

D_α[p||q] = (1/(α−1)) log ∫ p(θ)^α q(θ)^{1−α} dθ.  (1)

The definition is extended to α = 0, 1, +∞ by continuity. We note that when α → 1 the Kullback-Leibler (KL) divergence is recovered, which plays a crucial role in machine learning and information theory. Some other special cases are presented in Table 1. The method proposed in this work also considers α ≤ 0 (although (1) is no longer a divergence for these α values), and we include from [23] some useful properties for the forthcoming derivations.
Proposition 1. (Monotonicity) Rényi's α-divergence definition (1), extended to negative α, is continuous and non-decreasing on α ∈ {α : −∞ < D_α < +∞}.

Proposition 2. (Skew symmetry) For α ∉ {0, 1}, D_α[p||q] = (α/(1−α)) D_{1−α}[q||p]. This implies D_α[p||q] ≤ 0 for α < 0. For the limiting case, D_{−∞}[p||q] = −D_{+∞}[q||p].

A critical question that is still in active research is how to choose a divergence in this rich family to obtain the optimal solution for a particular application, an issue which is discussed in the appendix.
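As a quick numerical sanity check of these properties (a hypothetical helper, not from the paper), note that ∫ p(θ)^α q(θ)^{1−α} dθ = E_p[(p(θ)/q(θ))^{α−1}], so (1) can be estimated by plain Monte Carlo with samples from p; the example below checks the skew symmetry of Proposition 2 for two unit-covariance Gaussians.

import numpy as np
from scipy.special import logsumexp
from scipy.stats import multivariate_normal as mvn

def renyi_divergence_mc(alpha, p, q, n=100000, seed=0):
    # MC estimate of D_alpha[p||q] for scipy "frozen" distributions p and q.
    theta = p.rvs(size=n, random_state=seed)
    log_ratio = p.logpdf(theta) - q.logpdf(theta)
    return (logsumexp((alpha - 1.0) * log_ratio) - np.log(n)) / (alpha - 1.0)

# Skew symmetry (Proposition 2) with alpha = 2, so 1 - alpha = -1:
p, q = mvn(mean=[0.0, 0.0]), mvn(mean=[1.0, 1.0])
lhs = renyi_divergence_mc(2.0, p, q)
rhs = (2.0 / (1.0 - 2.0)) * renyi_divergence_mc(-1.0, q, p)  # both close to 2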
2.2 Variational inference

Next we review the variational inference algorithm [1, 2] using posterior approximation as a running example. Consider observing a dataset of N i.i.d. samples D = {x_n}_{n=1}^N from a probabilistic model p(x|θ) parametrised by a random variable θ that is drawn from a prior p₀(θ). Bayesian inference involves computing the posterior distribution of the parameters given the data,

p(θ|D, φ) = p(θ, D|φ) / p(D|φ) = p₀(θ|φ) ∏_{n=1}^N p(x_n|θ, φ) / p(D|φ),  (2)
2
(VI)
(a) Approximated posterior.
(b) Hyper-parameter optimisation.
Figure 1: Mean-Field approximation for Bayesian linear regression. In this case ? = ? the
observation noise variance. The bound is tight as ? ? +?, biasing the VI solution to large ? values.
where p(D|φ) = ∫ p₀(θ|φ) ∏_{n=1}^N p(x_n|θ, φ) dθ is called the marginal likelihood or model evidence. The hyper-parameters of the model are denoted as φ, which might be omitted henceforth for notational ease. For many powerful models the exact posterior is typically intractable, and approximate inference introduces an approximation q(θ), in some tractable distribution family Q, to the exact posterior. One way to obtain this approximation is to minimise the KL divergence KL[q(θ)||p(θ|D)], which is also intractable due to the difficult term p(D). Variational inference (VI) sidesteps this difficulty by considering an equivalent optimisation problem that maximises the variational lower-bound:

L_VI(q; D, φ) = log p(D|φ) − KL[q(θ)||p(θ|D, φ)] = E_q[log(p(θ, D|φ)/q(θ))].  (3)

The variational lower-bound can also be used to optimise the hyper-parameters φ.
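In practice L_VI is estimated with samples from q; a minimal sketch (our helper names) is:

import numpy as np

def elbo_mc(log_joint, log_q, sample_q, K=100):
    # MC estimate of L_VI = E_q[log p(theta, D) - log q(theta)];
    # sample_q() draws one theta from q, log_joint/log_q evaluate log-densities.
    return np.mean([log_joint(th) - log_q(th)
                    for th in (sample_q() for _ in range(K))])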
To illustrate the approximation quality of VI, we present a mean-field approximation example for Bayesian linear regression in Figure 1(a) (in magenta). Readers are referred to the appendix for details, but essentially a factorised Gaussian approximation is fitted to the true posterior, a correlated Gaussian in this case. The approximation recovers the posterior mean correctly, but is over-confident. Moreover, as L_VI is the difference between the marginal likelihood and the KL divergence, hyper-parameter optimisation can be biased away from the exact MLE towards the region of parameter space where the KL term is small [25] (see Figure 1(b)).
3 Variational Rényi bound

Recall from Section 2.1 that the family of Rényi divergences includes the KL divergence. Perhaps variational free-energy approaches can be generalised to the Rényi case? Consider approximating the exact posterior p(θ|D) by minimizing Rényi's α-divergence D_α[q(θ)||p(θ|D)] for some selected α > 0. Now we consider the equivalent optimization problem max_{q∈Q} log p(D) − D_α[q(θ)||p(θ|D)], whose objective, when α ≠ 1, can be rewritten as

L_α(q; D) := (1/(1−α)) log E_q[(p(θ, D)/q(θ))^{1−α}].  (4)
We name this new objective the variational Rényi (VR) bound. Importantly the above definition can be extended to α ≤ 0, and the following theorem is a direct result of Proposition 1.

Theorem 1. The objective L_α(q; D) is continuous and non-increasing on α ∈ {α : |L_α| < +∞}. Especially for all 0 < α₊ < 1 and α₋ < 0,

L_VI(q; D) = lim_{α→1} L_α(q; D) ≤ L_{α₊}(q; D) ≤ L₀(q; D) ≤ L_{α₋}(q; D).

Also L₀(q; D) = log p(D) if and only if the support supp(p(θ|D)) ⊆ supp(q(θ)).
Theorem 1 indicates that the VR bound can be useful for model selection by sandwiching the marginal likelihood with bounds computed using positive and negative α values, which we leave to future work. In particular L₀ = log p(D) under the mild assumption that q is supported wherever the exact posterior is supported. This assumption holds for many commonly used distributions, e.g. Gaussians are supported on the entire space, and in the following we assume that this condition is satisfied.

Choosing different α values allows the approximation to balance between zero-forcing (α → +∞; when using uni-modal approximations this is usually called mode-seeking) and mass-covering (α → −∞) behaviour. This is illustrated by the Bayesian linear regression example, again in Figure 1(a). First notice that α → +∞ (in cyan) returns non-zero uncertainty estimates (although it is more over-confident than VI), which is different from the maximum a posteriori (MAP) method that only returns a point estimate. Second, setting α = 0.0 (in green) returns q(θ) = ∏_i p(θ_i|D) and the exact marginal likelihood log p(D) (Figure 1(b)). Also the approximate MLE is less biased for α = 0.5 (in blue), since now the tightness of the bound is less hyper-parameter dependent.
4 The VR bound optimisation framework

This section addresses several issues of VR bound optimisation by proposing further approximations. First, when α ≠ 1, the VR bound is usually just as intractable as the marginal likelihood for many useful models. However, Monte Carlo (MC) approximation is applied here to extend the set of models that can be handled. The resulting method can be applied to any model that MC-VI [12, 13, 14, 15] is applied to. Second, Theorem 1 suggests that the VR bound is to be minimised when α < 0, which performs disastrously in the MLE context. As we shall see, this issue is also solved by the MC approximation under certain conditions. Third, a mini-batch training method is developed for large-scale datasets in the posterior approximation context. Hence the proposed optimisation framework of the VR bound enables tractable application to the same class of models as SVI.
4.1 Monte Carlo approximation of the VR bound

Consider learning a latent variable model with MLE as a running example, where the model is specified by a conditional distribution p(x|h, φ) and a prior p(h|φ) on the latent variables h. Examples include models treated by the variational auto-encoder (VAE) approach [18, 19] that parametrises the likelihood with a (deep) neural network. MLE requires log p(x), which is obtained by marginalising out h and is often intractable, so the VR bound is considered as an alternative optimisation objective. However, instead of using exact bounds, a simple Monte Carlo (MC) method is deployed, which uses finite samples h_k ∼ q(h|x), k = 1, ..., K to approximate L_α ≈ L̂_{α,K}:

L̂_{α,K}(q; x) = (1/(1−α)) log (1/K) Σ_{k=1}^K [(p(h_k, x)/q(h_k|x))^{1−α}].  (5)
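Numerically, (5) is best computed from log importance weights with a log-sum-exp; a minimal sketch (our helper name, with the α → 1 limit handled separately) is:

import numpy as np
from scipy.special import logsumexp

def vr_bound_mc(log_w, alpha):
    # log_w[k] = log p(h_k, x) - log q(h_k | x) for K samples h_k ~ q(.|x).
    # alpha = 1 recovers the vanilla lower-bound, alpha = 0 the IWAE estimate.
    log_w = np.asarray(log_w)
    if np.isclose(alpha, 1.0):         # the limit alpha -> 1
        return log_w.mean()
    K = log_w.size
    return (logsumexp((1.0 - alpha) * log_w) - np.log(K)) / (1.0 - alpha)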
The importance weighted auto-encoder (IWAE) [17] is a special case of this framework with α = 0 and K < +∞. But unlike traditional VI, here the MC approximation is biased. Fortunately we can characterise the bias by the following theorems, proved in the appendix.

Theorem 2. Assume E_{{h_k}_{k=1}^K}[|L̂_{α,K}(q; x)|] < +∞ and |L_α| < +∞. Then E_{{h_k}_{k=1}^K}[L̂_{α,K}(q; x)], as a function of α ∈ R and K ≥ 1, is:
1) non-decreasing in K for fixed α ≤ 1, and non-increasing in K for fixed α ≥ 1;
2) E_{{h_k}_{k=1}^K}[L̂_{α,K}(q; x)] → L_α as K → +∞;
3) continuous and non-increasing in α with fixed K.

Corollary 1. For finite K, either E_{{h_k}_{k=1}^K}[L̂_{α,K}(q; x)] < log p(x) for all α, or there exists α_K ≤ 0 such that E_{{h_k}_{k=1}^K}[L̂_{α_K,K}(q; x)] = log p(x) and E_{{h_k}_{k=1}^K}[L̂_{α,K}(q; x)] > log p(x) for all α < α_K. Also α_K is non-decreasing in K if it exists, with lim_{K→1} α_K = −∞ and lim_{K→+∞} α_K = 0.
The intuition behind the theorems is visualised in Figure 2(a). By definition, the exact VR bound is a lower-bound or upper-bound of log p(x) when α > 0 or α < 0, respectively. However, the MC approximation E[L̂_{α,K}] biases the estimate towards L_VI, where the approximation quality can be improved using more samples. Thus for finite samples and under mild conditions, negative α values can potentially be used to improve the accuracy of the approximation, at the cost of losing the upper-bound guarantee. Figure 2(b) shows an empirical evaluation obtained by computing the exact values and the MC approximations of the Rényi divergences. In this example p, q are 2-D Gaussian distributions with μ_p = [0, 0], μ_q = [1, 1] and Σ_p = Σ_q = I. The sampling procedure is repeated 200 times to estimate the expectation. Clearly for K = 1 it is equivalent to an unbiased estimate of the KL-divergence for all α (even though now the estimation is biased for D_α). For K > 1 and α < 1, the MC method under-estimates the VR bound, and the bias decreases with increasing K. For α > 1 the inequality is reversed, also as predicted.

Figure 2: (a) An illustration of the bounding properties of MC approximations to the VR bounds. (b) The bias of the MC approximation. Best viewed in colour; see the main text for details.
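The simulation behind Figure 2(b) can be reproduced along the following lines (our variable names and seeding; here log p(x) = 0 by construction, so the exact bound equals −D_α[q||p] and the α = 1 case is excluded):

import numpy as np
from scipy.special import logsumexp
from scipy.stats import multivariate_normal as mvn

p, q = mvn(mean=[0.0, 0.0]), mvn(mean=[1.0, 1.0])   # Sigma_p = Sigma_q = I

def mean_vr_estimate(alpha, K, reps=200, seed=0):
    rng = np.random.default_rng(seed)
    est = []
    for _ in range(reps):
        h = q.rvs(size=K, random_state=rng).reshape(K, 2)   # h_k ~ q
        log_w = p.logpdf(h) - q.logpdf(h)
        est.append((logsumexp((1.0 - alpha) * log_w) - np.log(K)) / (1.0 - alpha))
    return np.mean(est)   # compare against the exact value -D_alpha[q||p]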
4.2 Unified implementation with the reparameterization trick

Readers may have noticed that L_VI has a different form compared to L_α with α ≠ 1. In this section we show how to unify the implementation for all finite α settings, using the reparameterization trick [13, 18] as an example. This trick assumes the existence of a mapping θ = g_φ(ε), where the distribution of the noise term ε satisfies q(θ)dθ = p(ε)dε. Then the expectation of a function F(θ) over the distribution q(θ) can be computed as E_{q(θ)}[F(θ)] = E_{p(ε)}[F(g_φ(ε))]. One prevalent example is the Gaussian reparameterization: θ ∼ N(μ, Σ) ⇔ θ = μ + Σ^{1/2}ε, ε ∼ N(0, I). Now we apply the reparameterization trick to the VR bound:

L_α(q_φ; x) = (1/(1−α)) log E_ε[(p(g_φ(ε), x)/q(g_φ(ε)))^{1−α}].  (6)

Then the gradient of the VR bound w.r.t. φ is (similarly for the model parameters; see the appendix for the derivation)

∇_φ L_α(q_φ; x) = E_ε[w_α(ε; φ, x) ∇_φ log(p(g_φ(ε), x)/q(g_φ(ε)))],  (7)

where w_α(ε; φ, x) = (p(g_φ(ε), x)/q(g_φ(ε)))^{1−α} / E_ε[(p(g_φ(ε), x)/q(g_φ(ε)))^{1−α}] denotes the normalised importance weight. One can show that this recovers the stochastic gradients of L_VI by setting α = 1 in (7), since then w₁(ε; φ, x) = 1, which means the resulting algorithm unifies the computation for all finite α settings. For MC approximations, we use K samples to approximately compute the normalised weights
p(g ( ),x)
, k = 1, ..., K, and the stochastic gradient becomes
w
??,k (k ; ?, x) ? q(g?? (kk ))
?? L??,K (q? ; x) =
K
X
p(g? (k ), x)
w
??,k (k ; ?, x)?? log
.
q(g? (k ))
(8)
k=1
When ? = 1, w
?1,k (k ; ?, x) = 1/K, and it recovers the stochastic gradient VI method [18].
To speed-up learning [17] suggested back-propagating only one sample j with j ? pj = w
??,j , which
can be easily extended to our framework. Importantly, the use of different ? < 1 indicates the degree
of emphasis placed upon locations where the approximation q under-estimates p, and in the extreme
case ? ? ??, the algorithm chooses the sample that has the maximum unnormalised importance
weight. We name this approach VR-max and summarise it and the general case in Algorithm 1. Note
that VR-max (and VR-? with ? < 0 and MC approximations) does not minimise D1?? [p||q]. It is
true that L? ? log p(x) for negative ? values. However Corollary 1 suggests that the tightest MC
approximation for given K has non-positive ?K value, or might not even exist. Furthermore ?K
becomes more negative as the mismatch between q and p increases, e.g. VAE uses a uni-modal q
distribution to approximate the typically multi-modal exact posterior.
5
Algorithm 1 One gradient step for VR-α/VR-max with a single backward pass. Here ŵ(ε_k; x) short-hands ŵ_{0,k}(ε_k; φ, x) in the main text.
1: given the current datapoint x, sample ε₁, ..., ε_K ∼ p(ε)
2: for k = 1, ..., K, compute the unnormalised weight
   log ŵ(ε_k; x) = log p(g_φ(ε_k), x) − log q(g_φ(ε_k)|x)
3: choose the sample ε_j to back-propagate:
   if |α| < ∞: j ∼ p_k, where p_k ∝ ŵ(ε_k; x)^{1−α}
   if α = −∞: j = arg max_k log ŵ(ε_k; x)
4: return the gradients ∇_φ log ŵ(ε_j; x)
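The sample-selection step of Algorithm 1 is only a few lines on top of the log-weights; a minimal numpy sketch (our helper name; differentiating log ŵ(ε_j; x) is then left to whichever framework parametrises g_φ):

import numpy as np

def choose_backprop_sample(log_w, alpha, rng=None):
    # Step 3 of Algorithm 1: pick the index j whose gradient is back-propagated.
    log_w = np.asarray(log_w)
    if np.isinf(alpha) and alpha < 0:    # VR-max
        return int(np.argmax(log_w))
    rng = rng or np.random.default_rng()
    s = (1.0 - alpha) * log_w
    p = np.exp(s - s.max())
    p /= p.sum()                         # p_k propto w_k^(1 - alpha)
    return int(rng.choice(log_w.size, p=p))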
Figure 3: Connecting local and global divergence minimisation. (The schematic relates EP, SEP, BB-α and the VR bound via fixed-point approximation, energy approximation, factor tying and mini-batch sub-sampling, spanning local to global objectives.)

4.3 Stochastic approximation for large-scale learning
VR bounds can also be applied to full Bayesian inference with posterior approximation. However, for large datasets full batch learning is very inefficient. Mini-batch training is non-trivial here since the VR bound cannot be represented by an expectation over a datapoint-wise loss, except when α = 1. This section introduces two proposals for mini-batch training, and interestingly, this recovers two existing algorithms that were motivated from a different perspective. In the following we define the "average likelihood" f̄_D(θ) = [∏_{n=1}^N p(x_n|θ)]^{1/N}. Hence the joint distribution can be rewritten as p(θ, D) = p₀(θ) f̄_D(θ)^N. Also, for a mini-batch of M datapoints S = {x_{n₁}, ..., x_{n_M}} ⊂ D, we define the "subset average likelihood" f̄_S(θ) = [∏_{m=1}^M p(x_{n_m}|θ)]^{1/M}.

The first proposal considers fixed point approximations with mini-batch sub-sampling. It first derives the fixed point conditions for the variational parameters (e.g. the natural parameters of q) using the exact VR bound (4), then designs an iterative algorithm using those fixed point equations, but with f̄_D(θ) replaced by f̄_S(θ). The second proposal also applies this subset average likelihood approximation idea, but directly to the VR bound (4) (so this approach is named energy approximation):
"
#
?S (?)N 1??
1
p
(?)
f
0
L?? (q; S) =
log Eq
.
(9)
1??
q(?)
In the appendix we demonstrate with detailed derivations that the fixed point approximation returns Stochastic EP (SEP) [7], and that black-box alpha (BB-α) [16] corresponds to the energy approximation. Both algorithms were originally proposed to approximate (power) EP [3, 26], which usually minimises α-divergences locally, and considers M = 1, α ∈ [1 − 1/N, 1) and exponential family distributions. These approximations were made possible by factor tying, which significantly reduces the memory overhead of full EP and makes both SEP and BB-α scalable to large datasets, just as SVI. The new derivation provides a theoretical justification from the energy perspective, and also sheds light on the connections between local and global divergence minimisation, as depicted in Figure 3. Note that all these methods recover SVI when α → 1, in which case global and local divergence minimisation are equivalent. Also these results suggest that recent attempts at distributed posterior approximation (by carving up the dataset into pieces with M > 1 [10, 11]) can be extended to both SEP and BB-α.

Monte Carlo methods can also be applied to both proposals. For SEP the moment computation can be approximated with MCMC [10, 11]. For BB-α one can show, in the same way as the proof of Theorem 2, that the simple MC approximation in expectation lower-bounds the BB-α energy when α ≤ 1. In general it is also an open question how to choose α given the mini-batch size M and the number of samples K, but there is evidence that intermediate α values can be superior [27, 28].
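For concreteness, a direct MC sketch of the mini-batch energy (9) follows (our helper names; K samples from q, M datapoints per mini-batch, and α ≠ 1 is assumed):

import numpy as np
from scipy.special import logsumexp

def vr_energy_minibatch(alpha, log_prior, log_lik, sample_q, log_q, batch, N, K=10):
    # MC estimate of the energy approximation (9): the subset average likelihood
    # f_S(theta) replaces the full-data average; batch holds M datapoints and
    # log_lik(theta, x) is a single datapoint's log-likelihood.
    vals = np.empty(K)
    for k in range(K):
        th = sample_q()
        avg_ll = np.mean([log_lik(th, x) for x in batch])   # log f_S(theta)
        vals[k] = log_prior(th) + N * avg_ll - log_q(th)
    return (logsumexp((1.0 - alpha) * vals) - np.log(K)) / (1.0 - alpha)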
5 Experiments

We evaluate the VR bound methods on Bayesian neural networks and variational auto-encoders. All the experiments used the ADAM optimizer [29], and the detailed experimental set-up (batch size, learning rate, etc.) can be found in the appendix. The implementation of all the experiments in Python is released at https://github.com/YingzhenLi/VRbound.
Figure 4: Test LL and RMSE results for Bayesian neural network regression; α settings are arranged from mass-covering to zero-forcing. The lower the better.
5.1 Bayesian neural network

The first experiment considers Bayesian neural network regression. The datasets are collected from the UCI dataset repository.¹ The model is a single-layer neural network with 50 hidden units (ReLUs) for all datasets except Protein and Year (100 units). We use a Gaussian prior θ ∼ N(θ; 0, I) for the network weights and a Gaussian approximation to the true posterior, q(θ) = N(θ; μ_q, diag(σ_q)). We follow the toy example in Section 3 and consider α ∈ {−∞, 0.0, 0.5, 1.0, +∞} in order to examine the effect of mass-covering/zero-forcing behaviour. Stochastic optimisation uses the energy approximation proposed in Section 4.3. MC approximation is also deployed to compute the energy function, with K = 100 and K = 10 used for the small and large datasets (Protein and Year), respectively.

We summarise the test negative log-likelihood (LL) and RMSE with standard errors (across different random splits, except for Year) for selected datasets in Figure 4, with the full results provided in the appendix. These results indicate that for posterior approximation problems, the optimal α may vary for different datasets. Also the MC approximation complicates the selection of α (see appendix). Future work should develop algorithms to automatically select the best α values, although a naive approach could use validation sets. We observed two major trends: zero-forcing/mode-seeking methods tend to focus on improving the predictive error, while mass-covering methods return better calibrated uncertainty estimates and better test log-likelihood. In particular, VI returns lower test log-likelihood for most of the datasets. Furthermore, α = 0.5 produced overall good results for both test LL and RMSE, possibly because the skew symmetry is centred at α = 0.5 and the corresponding divergence is the only symmetric distance measure in the family.
5.2 Variational auto-encoder

The second experiment considers variational auto-encoders for unsupervised learning. We mainly compare three approaches: VAE (α = 1.0), IWAE (α = 0), and VR-max (α = −∞), which are implemented upon the publicly available code.² Four datasets are considered: Frey Face (with 10-fold cross validation), Caltech 101 Silhouettes, MNIST and OMNIGLOT. The VAE model has L = 1, 2 stochastic layers with deterministic layers stacked between, and the network architecture is detailed in the appendix. We reproduce the IWAE experiments to obtain a fair comparison, since the results in the original publication [17] mismatch those evaluated with the publicly available code.

We report test log-likelihood results in Table 2, computed as log p(x) ≈ L̂_{0,5000}(q; x) following [17]. We also present some samples from the trained models in the appendix. Overall VR-max is almost indistinguishable from IWAE. Other positive α settings (e.g. α = 0.5) return worse results, e.g. 1374.64 ± 5.62 for Frey Face and −85.50 for MNIST with α = 0.5, L = 1 and K = 5. These worse results for α > 0 indicate the preference for tighter approximations to the likelihood function in MLE problems. Small negative α values (e.g. α = −1.0, −2.0) return better results on different splits of the Frey Face data, and overall the best α value is dataset-specific.

¹ http://archive.ics.uci.edu/ml/datasets.html
² https://github.com/yburda/iwae
Table 2: Average test log-likelihood. Results for VAE on MNIST and OMNIGLOT are collected from [17].

Dataset                     L   K    VAE        IWAE       VR-max
Frey Face (± std. err.)     1   5    1322.96    1380.30    1377.40
                                     ±10.03     ±4.60      ±4.59
Caltech 101 Silhouettes     1   5    -119.69    -117.89    -118.01
                            1   50   -119.61    -117.21    -117.10
MNIST                       1   5    -86.47     -85.41     -85.42
                            1   50   -86.35     -84.80     -84.81
                            2   5    -85.01     -83.92     -84.04
                            2   50   -84.78     -83.05     -83.44
OMNIGLOT                    1   5    -107.62    -106.30    -106.33
                            1   50   -107.80    -104.68    -105.05
                            2   5    -106.31    -104.64    -104.71
                            2   50   -106.30    -103.25    -103.72
Figure 5: Bias of the sampling approximation to the VR bound; results for K = 5, 50 samples are shown on the left and right, respectively.

Figure 6: Importance weights during training: (a) log of the ratio R = w_max/(1 − w_max); (b) weights of samples. See the main text for details. Best viewed in colour.
VR-max's success might be explained by the tightness of the bound. To evaluate this, we compute the VR bounds on 100 test datapoints using the 1-layer VAE trained on Frey Face, with K ∈ {5, 50} and α ∈ {0, −1, −5, −50, −500}. Figure 5 presents the estimated gap L̂_{α,K} − L̂_{0,5000}. The results indicate that L̂_{α,K} provides a lower-bound, and that the gap narrows as α → −∞. Also increasing K provides improvements. The standard error of the estimation is almost constant for different α (with K fixed), and is negligible when compared to the MC approximation bias.

Another explanation for VR-max's success is that the sample with the largest normalised importance weight w_max dominates the contributions to all the gradients. This is confirmed by tracking R = w_max/(1 − w_max) during training on Frey Face (Figure 6(a)). Also Figure 6(b) shows the 10 largest importance weights from K = 50 samples in descending order, which exhibit an exponential decay behaviour, with the largest weight occupying more than 75% of the probability mass. Hence VR-max provides a fast approximation to IWAE when tested on CPUs or on multiple GPUs with high communication costs. Indeed our numpy implementation of VR-max achieves up to 3 times speed-up compared to IWAE (9.7s vs. 29.0s per epoch, tested on Frey Face data with K = 50 and batch size M = 100; CPU info: Intel Core i7-4930K CPU @ 3.40GHz). However, this speed advantage is less significant when the gradients can be computed very efficiently on a single GPU.
6 Conclusion

We have introduced the variational Rényi bound and an associated optimisation framework. We have shown the richness of the new family, not only by connecting it to existing approaches including VI/VB, SEP, BB-α, VAE and IWAE, but also by proposing the VR-max algorithm as a new special case. Empirical results on Bayesian neural networks and variational auto-encoders indicate that VR bound methods are widely applicable and can obtain state-of-the-art results. Future work will focus on both experimental and theoretical sides. Theoretical work will study the interaction of the biases introduced by MC approximation and datapoint sub-sampling. A guide on choosing optimal α values is needed for practitioners when applying the framework to their applications.

Acknowledgements
We thank the Cambridge MLG members and the reviewers for comments. YL thanks the Schlumberger Foundation FFTF fellowship. RET thanks EPSRC grants # EP/M026957/1 and EP/L000776/1.
References
[1] M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul. An introduction to variational methods for graphical models. Machine Learning, 37(2):183–233, 1999.
[2] M. J. Beal. Variational Algorithms for Approximate Bayesian Inference. PhD thesis, University College London, 2003.
[3] T. Minka. Expectation propagation for approximate Bayesian inference. In Conference on Uncertainty in Artificial Intelligence (UAI), 2001.
[4] M. Opper and O. Winther. Expectation consistent approximate inference. The Journal of Machine Learning Research, 6:2177–2204, 2005.
[5] T. Minka. Divergence measures and message passing. Tech. rep., Microsoft Research, 2005.
[6] M. D. Hoffman, D. M. Blei, C. Wang, and J. W. Paisley. Stochastic variational inference. Journal of Machine Learning Research, 14(1):1303–1347, 2013.
[7] Y. Li, J. M. Hernández-Lobato, and R. E. Turner. Stochastic expectation propagation. In Advances in Neural Information Processing Systems (NIPS), 2015.
[8] G. Dehaene and S. Barthelmé. Expectation propagation in the large-data limit. arXiv:1503.08060, 2015.
[9] T. Broderick, N. Boyd, A. Wibisono, A. C. Wilson, and M. I. Jordan. Streaming variational Bayes. In Advances in Neural Information Processing Systems (NIPS), 2013.
[10] A. Gelman, A. Vehtari, P. Jylänki, C. Robert, N. Chopin, and J. P. Cunningham. Expectation propagation as a way of life. arXiv:1412.4869, 2014.
[11] M. Xu, B. Lakshminarayanan, Y. W. Teh, J. Zhu, and B. Zhang. Distributed Bayesian posterior sampling via moment sharing. In Advances in Neural Information Processing Systems (NIPS), 2014.
[12] J. Paisley, D. Blei, and M. Jordan. Variational Bayesian inference with stochastic search. In Proceedings of the 29th International Conference on Machine Learning (ICML), 2012.
[13] T. Salimans and D. A. Knowles. Fixed-form variational posterior approximation through stochastic linear regression. Bayesian Analysis, 8(4):837–882, 2013.
[14] R. Ranganath, S. Gerrish, and D. M. Blei. Black box variational inference. In Proceedings of the 17th International Conference on Artificial Intelligence and Statistics (AISTATS), 2014.
[15] A. Kucukelbir, R. Ranganath, A. Gelman, and D. M. Blei. Automatic variational inference in Stan. In Advances in Neural Information Processing Systems (NIPS), 2015.
[16] J. M. Hernández-Lobato, Y. Li, M. Rowland, D. Hernández-Lobato, T. Bui, and R. E. Turner. Black-box α-divergence minimization. In Proceedings of the 33rd International Conference on Machine Learning (ICML), 2016.
[17] Y. Burda, R. Grosse, and R. Salakhutdinov. Importance weighted autoencoders. In International Conference on Learning Representations (ICLR), 2016.
[18] D. P. Kingma and M. Welling. Auto-encoding variational Bayes. In International Conference on Learning Representations (ICLR), 2014.
[19] D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of the 30th International Conference on Machine Learning (ICML), 2014.
[20] A. Rényi. On measures of entropy and information. In Fourth Berkeley Symposium on Mathematical Statistics and Probability, vol. 1, 1961.
[21] S.-i. Amari. Differential-Geometrical Methods in Statistics. New York: Springer, 1985.
[22] C. Tsallis. Possible generalization of Boltzmann-Gibbs statistics. Journal of Statistical Physics, 52(1-2):479–487, 1988.
[23] T. Van Erven and P. Harremoës. Rényi divergence and Kullback-Leibler divergence. IEEE Transactions on Information Theory, 60(7):3797–3820, 2014.
[24] P. Grünwald. Minimum Description Length Principle. MIT Press, Cambridge, MA, 2007.
[25] R. E. Turner and M. Sahani. Two problems with variational expectation maximisation for time-series models. In Bayesian Time Series Models (D. Barber, T. Cemgil, and S. Chiappa, eds.), ch. 5, pp. 109–130. Cambridge University Press, 2011.
[26] T. Minka. Power EP. Tech. Rep. MSR-TR-2004-149, Microsoft Research, 2004.
[27] T. D. Bui, D. Hernández-Lobato, Y. Li, J. M. Hernández-Lobato, and R. E. Turner. Deep Gaussian processes for regression using approximate expectation propagation. In Proceedings of the 33rd International Conference on Machine Learning (ICML), 2016.
[28] S. Depeweg, J. M. Hernández-Lobato, F. Doshi-Velez, and S. Udluft. Learning and policy search in stochastic dynamical systems with Bayesian neural networks. arXiv preprint arXiv:1605.07127, 2016.
[29] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR), 2015.
5,757 | 6,209 | Hypothesis Testing in Unsupervised Domain
Adaptation with Applications in Alzheimer?s Disease
Hao Henry Zhou?
Sathya N. Ravi?
Vamsi K. Ithapu?
?,?
?
Sterling C. Johnson
Grace Wahba
Vikas Singh?
?
?
William S. Middleton Memorial VA Hospital
University of Wisconsin?Madison
Abstract
Consider samples from two different data sources {x_s^i} ∼ P_source and {x_t^i} ∼ P_target. We only observe their transformed versions h(x_s^i) and g(x_t^i), for some known function class h(·) and g(·). Our goal is to perform a statistical test checking if P_source = P_target while removing the distortions induced by the transformations. This problem is closely related to domain adaptation, and in our case, is motivated by the need to combine clinical and imaging based biomarkers from multiple sites and/or batches, a fairly common impediment in conducting analyses with much larger sample sizes. We address this problem using ideas from hypothesis testing on the transformed measurements, wherein the distortions need to be estimated in tandem with the testing. We derive a simple algorithm and study its convergence and consistency properties in detail, and provide lower-bound strategies based on recent work in continuous optimization. On a dataset of individuals at risk for Alzheimer's disease, our framework is competitive with alternative procedures that are twice as expensive and in some cases operationally infeasible to implement.
1 Introduction
A first order requirement in many estimation tasks is that the training and testing samples are from
the same underlying distribution and the associated features are directly comparable. But in many
real world datasets, training/testing (or source/target) samples may come from different "domains":
they may be variously represented and involve different marginal distributions [8, 32]. "Domain
adaptation" (DA) algorithms [24, 27] are often used to address such problems. For example, in
vision, not accounting for systematic source/target variations in images due to commodity versus
professional camera equipment yields poor accuracy for visual recognition; here, these schemes
can be used to match the source/target distributions or identify intermediate latent representations
[12, 1, 9], often yielding superior performance [29, 12, 1, 9]. Such success has lead to specialized
formulations, for instance, when target annotations are absent (unsupervised) [11, 13] or minimally
available (semi-supervised) [7, 22]. With a mapping to compensate for this domain shift, we know
that the normalized (or transformed) features are sufficiently invariant and reliable in practice.
In numerous DA applications, the interest is in seamlessly translating a classifier across domains;
consequently, the model's test/target predictive performance serves the intended goals. However, in
many areas of science, issues concerning the statistical power of the experiment, the sample sizes
needed to achieve this power and whether we can derive p-values for the estimated domain adaptation
model are equally, if not, more important. For instance, the differences in instrument calibration and
reagents in wet lab experiments are potential DA applications except that the downstream analysis may
involve little to no discrimination performance measures per se. Separately, in multi-site population
studies [17, 18, 21], where due to operational reasons, recruitment and data acquisition is distributed
over multiple sites (even countries); site-specific shifts in measurements and missing covariates
are common [17, 18, 21]. The need to harmonize such data requires some form of DA. While good
predictive performance is useful, the ability to perform hypothesis tests and obtain interpretable
statistical quantities remain central to the conduct of experiments or analyses across a majority of
scientific disciplines. We remark that constructs such as the HΔH distance have been widely used to
analyze non-conservative DA and obtain probabilistic bounds on the performance of a classifier from
certain hypotheses classes, but the statistical considerations identified above are not well studied and
do not follow straightforwardly from the learning theoretic results derived in [2, 5].
A Motivating Example from Neuroscience. The social and financial burden (of health-care) is
projected to grow considerably since elderly are the fastest growing populace [28, 6], and age is the
strongest risk factor for neurological disorders such as Alzheimer's disease (AD). Although numerous
large scale projects study the aging brain to identify early biomarkers for various types of dementia,
when younger cohorts are analyzed (farther away from disease onset), the effect sizes become worse.
This has led to multi-center research collaborations and clinical trials in an effort to increase sample
sizes. Despite the promise, combining data across sites pose significant statistical challenges ? for AD
in particular, the need for harmonization or standardization (i.e., domain adaptation) was found to be
essential [20, 34] in the analysis of multi-site Cerebrospinal fluid (CSF) assays and brain volumetric
measurements. These analyses refer to the use of AD related pathological biomarkers (?-amyloid
peptide in CSF), but there is variability in absolute concentrations due to CSF collection and storage
procedures [34]. Similar variability issues exist for amyloid and structural brain imaging studies,
and are impediments before multi-site data can be pooled and analyzed in totality. The temporary
solution emerging from [20] is to use a "normalization/anchor" cohort of individuals which will
then be validated using test/retest variation. The goal of this paper is to provide a rigorous statistical
framework for addressing these challenges that will make domain adaptation an analysis tool in
neuroimaging as well as other experimental areas.
This paper makes the following key contributions. a) On the formulation side, we generalize
existing models which assume an identical transformation applied to both the source/target domains
to compensate for the domain shift. Our proposal permits domain-specific transformations to align
both the marginal (and the conditional) data distributions; b) On the statistical side, we derive a
provably consistent hypothesis test to check whether the transformation model can indeed correct the
"shift", directly yielding p-values. We also show consistency of the model in that we can provably
estimate the actual transformation parameters in an asymptotic sense; c) We identify some interesting
links of our estimation with recent developments in continuous optimization and show how our model
permits an analysis based on obtaining successively tighter lower bounds; d) Finally, we present
experiments on an AD study showing how CSF data from different batches (source/target) can be
harmonized enabling the application of standard statistical analysis schemes.
2 Background
Consider the unsupervised domain adaptation setting where the inputs/features/covariates in the source and target domains are denoted by x_s and x_t respectively. The source and target feature spaces are related via some unknown mapping, which is recovered by applying some appropriate transformations on the inputs. We denote these transformed inputs as x̃_s and x̃_t. Within this setting, our goal is two-fold: first, to estimate the source-to-target mapping, followed by performing some statistical test about the "goodness" of the estimate. Specifically, the problem is to first estimate suitable transformations h ∈ G, g ∈ G′, parameterized by some θ and β respectively, such that the transformed data x̃_s := h(x_s, θ) and x̃_t := g(x_t, β) have similar distributions. G and G′ restrict the allowable mappings (e.g., affine) between source and target. Clearly the goodness of domain adaptation depends on the nature and size of G, and the similarity measure used to compare the distributions. The distance/similarity measure used in our model defines a statistic for comparing distributions. Hence, using the estimated transformations, we then provide a hypothesis test for the existence of θ and β such that Pr(x̃_s) = Pr(x̃_t), and finally assign p-values for the significance.
To set up this framework, we start with a statistic that measures the distance between two distributions. As motivated in Section 1, we do not impose any parametric assumptions. Since we are interested in the mismatch of Pr(x̃_s) and Pr(x̃_t), we use maximum mean discrepancy (MMD), which measures the mean distance between {x_s} and {x_t} in a Hilbert space induced by a characteristic kernel K,

$$\mathrm{MMD}(x_s, x_t) = \sup_{f \in \mathcal{F}} \left( \frac{1}{m}\sum_{i=1}^{m} f(x_s^i) - \frac{1}{n}\sum_{i=1}^{n} f(x_t^i) \right) = \left\| \frac{1}{m}\sum_{i=1}^{m} K(x_s^i, \cdot) - \frac{1}{n}\sum_{i=1}^{n} K(x_t^i, \cdot) \right\|_{\mathcal{H}} \quad (1)$$
where $\mathcal{F} = \{f \in \mathcal{H}_K, \|f\|_{\mathcal{H}_K} \le 1\}$ and $\mathcal{H}_K$ denotes the universal RKHS. The advantage of MMD over other nonparametric distance measures is discussed in [30, 15, 16, 31]. Specifically, the MMD statistic defines a metric, and whenever MMD is large, the samples are "likely" from different distributions. The simplicity of MMD and the statistical and asymptotic guarantees it provides [15, 16] largely drive our estimation and testing approach. In fact, our framework will operate on "transformed" data x̃_s and x̃_t while estimating the appropriate transformations.
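For concreteness, the following is a minimal sketch of the empirical MMD in (1) with a Gaussian RBF kernel; the helper names and the fixed bandwidth are our own illustrative choices, not part of the paper.

import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    # Gaussian RBF kernel matrix: K[i, j] = exp(-||A_i - B_j||^2 / (2 sigma^2))
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-sq / (2 * sigma**2))

def mmd(xs, xt, sigma=1.0):
    # Empirical MMD of (1): RKHS norm of the difference of kernel mean embeddings.
    m, n = len(xs), len(xt)
    val = (rbf_kernel(xs, xs, sigma).sum() / m**2
           + rbf_kernel(xt, xt, sigma).sum() / n**2
           - 2 * rbf_kernel(xs, xt, sigma).sum() / (m * n))
    return np.sqrt(max(val, 0.0))

rng = np.random.default_rng(0)
xs = rng.normal(0, 1, (500, 1))
print(mmd(xs, rng.normal(0, 1, (500, 1))))  # near 0: same distribution
print(mmd(xs, rng.normal(1, 1, (500, 1))))  # clearly positive: shifted target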
2.1 Related Work
The body of work on domain adaptation is fairly extensive, even when restricted to the unsupervised
version. Below, we describe algorithms that are more closely related to our work and identify
the similarities/differences. A common feature of many unsupervised methods is to match the
feature/covariate distributions between the source and the target domains, and broadly, these fall
into two different categories. The first set of methods deal with feature distributions that may be
different but not due to the distortion of the inputs/features. Denoting the labels/outputs for the source
and target domains as y_s and y_t respectively, here we have Pr(y_s|x_s) ≈ Pr(y_t|x_t) but Pr(x_s) ≠
Pr(x_t); this is sampling bias. The ideas in [19, 25, 2, 5] address this by "re-weighting" the source
instances so as to minimize feature distribution differences between the source and the target. Such
re-weighting schemes do not necessarily correspond to transforming the source and target inputs,
and may simply scale or shift the appropriate loss functions. The central difference among these
approaches is the distance metric used to measure the discrepancy of the feature distributions.
The second set of methods correspond to the case where distributional differences are mainly caused
by feature distortion such as change in pose, lighting, blur and resolution in visual recognition.
Under this scenario, Pr(y_s|x_s) ≠ Pr(y_t|x_t) but Pr(x̃_s) ≈ Pr(x̃_t) and the transformed conditional distributions are close. [26, 1, 10, 14, 12] address this problem by learning the same feature transformation on source and target domains to minimize the difference of Pr(x̃_s) and Pr(x̃_t) directly. Our proposed model fits better under this umbrella, where the distributional differences are mainly caused by feature distortion due to site specific acquisition and other experimental issues. While
some methods are purely data-driven such as those using geodesic flow [14, 12], backpropagation
[10] and so on, other approaches estimate the transformation that minimizes distance metrics such
as the Maximum Mean Discrepancy (MMD) [26, 1]. To our knowledge, no statistical consistency
results are known for any of the methods that fall in the second set.
Overview: The idea in [1] is perhaps the most closely related to our proposal, but with a few important
differences. First, we relax the condition that the same transformation must be applied to each
domain; instead, we permit domain-specific transformations. Second, we derive a provably consistent
hypothesis test to check whether the transformation model can indeed correct the shift. We then prove
that the model is consistent when it is correct. These theoretical results apply directly to [1], which
turns out to be a special case of our framework. We find that the extension of our results to [26] is
problematic since that method violates the requirement that the mean differences should be measured
in a valid Reproducing Kernel Hilbert space (RKHS).
3 Model
We first present the objective function of our estimation problem and provide a simple algorithm to compute the unknown parameters θ and β. Recall the definition of MMD from (1). Given the kernel K and the source and target inputs x_s and x_t, we are interested in the MMD between the "transformed" inputs x̃_s and x̃_t. We are only provided the class of the transformations; m and n denote the sample sizes of the source and target inputs. So our objective function is simply

$$\min_{\theta \in \Theta_\theta} \min_{\beta \in \Theta_\beta} \left\| \mathbb{E}_{x_t} K(g(x_t, \beta), \cdot) - \mathbb{E}_{x_s} K(h(x_s, \theta), \cdot) \right\|_{\mathcal{H}} \quad (2)$$

where θ ∈ Θ_θ and β ∈ Θ_β are the constraint sets of the unknowns. The assumption that the parameters are bounded is reasonable (discussed in Section 4.3), and their approximations can be easily computed using certain data statistics. The empirical estimate of the above objective would simply be

$$\min_{\theta \in \Theta_\theta} \min_{\beta \in \Theta_\beta} \left\| \frac{1}{m}\sum_{i=1}^{m} K(g(x_t^i, \beta), \cdot) - \frac{1}{n}\sum_{i=1}^{n} K(h(x_s^i, \theta), \cdot) \right\|_{\mathcal{H}} \quad (3)$$
Remarks: We note a few important observations about (3) to draw the contrast from (1). The power of MMD lies in differentiating feature distributions, and the correction factor is entirely dependent on the choice of the kernel class; a richer one does a better job. Instead, our objective in (3) is showing that complex distortions can be corrected before applying the kernel in an intra-domain manner (as we show in Section 4). From the perspective of the complexity of distortions, this strategy may correspond to a larger hypotheses space compared to the classical MMD setup. This is clearly beneficial in settings where source and target are related by complex feature distortions.

It may be seen from the structure of the objective in (3) that designing an algorithm for any given K and G may not be straightforward. We present the estimation procedure for certain widely-used classes of K and G in Section 4.3. For the remainder of the section, where we present our testing procedure and describe technical results, we will assume that we can solve the above objective; the corresponding estimates are denoted by θ̂ and β̂.
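To make the estimation step tangible, here is a hypothetical sketch (ours, not the authors' released code) that restricts h to an affine map with g the identity and minimizes the empirical objective (3) with a generic local optimizer; `mmd2` and `minimal_mmd` are illustrative names we introduce, and the kernel computation is repeated so the block is self-contained.

import numpy as np
from scipy.optimize import minimize

def mmd2(xs, xt, sigma=1.0):
    # Squared empirical MMD with a Gaussian RBF kernel (V-statistic form).
    def k(A, B):
        sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-sq / (2 * sigma**2))
    m, n = len(xs), len(xt)
    return (k(xs, xs).sum() / m**2 + k(xt, xt).sum() / n**2
            - 2 * k(xs, xt).sum() / (m * n))

def minimal_mmd(xs, xt, sigma=1.0):
    # Fit h(x_s) = a * x_s + b (g = identity) by minimizing (3); returns the
    # parameter estimates and the minimal MMD statistic of (4).
    def objective(params):
        a, b = params
        return mmd2(a * xs + b, xt, sigma)
    res = minimize(objective, x0=np.array([1.0, 0.0]), method="Nelder-Mead")
    return res.x, np.sqrt(max(res.fun, 0.0))

rng = np.random.default_rng(0)
xs = rng.normal(0, 1, (400, 1))
xt = 2.0 * rng.normal(0, 1, (400, 1)) + 1.0  # affinely distorted target
theta_hat, stat = minimal_mmd(xs, xt)
print(theta_hat, stat)  # slope/intercept near (2, 1) up to sign, small statistic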
3.1 Minimal MMD test statistic
Observe that the objective in (3) is based on the assumption that the transformations h(·) ∈ G and g(·) ∈ G′ (G and G′ may be different if desired) are sufficient in some sense for "correcting" the discrepancy between the source and target inputs. Hence, we need to specify a model checking task on the recoverability of these transforms, while also concurrently checking the goodness of the estimates of θ and β. This task will correspond to a hypothesis test where the two hypotheses being compared are as follows.

H0: There exist θ and β such that Pr(g(x_t, β)) = Pr(h(x_s, θ)).
HA: There do not exist any such θ and β such that Pr(g(x_t, β)) = Pr(h(x_s, θ)).
Since the statistic for testing H0 here needs to measure the discrepancy of Pr(g(x_t, β)) and Pr(h(x_s, θ)), one can simply use the objective from (3). Hence our test statistic is given by the minimal MMD estimate for a given h ∈ G, g ∈ G′, x_s, x_t and computed at the estimates θ̂, β̂:

$$(\hat\theta, \hat\beta) := \arg\min_{\theta \in \Theta_\theta} \min_{\beta \in \Theta_\beta} \mathcal{M}(\theta, \beta) := \left\| \frac{1}{m}\sum_{i=1}^{m} K(g(x_t^i, \beta), \cdot) - \frac{1}{n}\sum_{i=1}^{n} K(h(x_s^i, \theta), \cdot) \right\|_{\mathcal{H}} \quad (4)$$
We denote the population estimates of the parameters under the null and alternate hypothesis as (θ_0, β_0) and (θ_A, β_A). Recall that the MMD corresponds to a statistic, and it has been used for testing the equality of distributions in earlier works [15]. It is straightforward to see that the true minimal MMD M*(θ_0, β_0) = 0 if and only if H0 is true. Observe that (4) is the empirical (and hence biased) "approximation" of the true minimal MMD statistic M*(·) from the objective in (2). This will be used while presenting our technical results (in Section 4) on the consistency and the corresponding statistical power guaranteed by this minimal MMD statistic based testing.
Relationship to existing approaches. Hypothesis testing involves transforming the inputs before
comparing their distributions in some RKHS (while we solve for the transformation parameters).
The approach in [15, 16] applies the kernel to the input data directly and asks whether or not the
distributions are the same based on the MMD measure. Our approach derives from the intuition
that allowing for the two-step procedure of transforming the inputs first, followed by computing
their distance in some RKHS is flexible, and in some sense is more general compared to directly
using MMD (or other distance measures) on the inputs. To see this, consider the simple example
where x_s ∼ N(0, 1) and x_t = x_s + 1. A simple application of MMD (from (1)) on the inputs x_s and x_t directly will reject the null hypothesis (where the H0 states that the source and target are the same distributions). Our algorithm allows for a transformation on the source/target and will correct this discrepancy and accept H0. Further, our proposed model generalizes the approach taken in [1]. Specifically, their approach is a special case of (3) with h(x_s) = W^T x_s, g(x_t) = W^T x_t (θ and β correspond to W here) with the constraint that W is orthogonal.
Summary: Overall, our estimation followed by testing procedure will be two-fold. Given x_s and x_t, the kernel K and the function spaces G, G′, we first estimate the unknowns θ and β (described in Section 4.3). The corresponding statistic M(θ̂, β̂) at the estimates is then compared to a given significance threshold α. Whenever M(θ̂, β̂) > α the null H0 is rejected. This rejection simply indicates that G and/or G′ are not sufficient in recovering the mismatch of source to target at the Type I error of α. Clearly, the richness of these function classes is central to the power of the testing procedure. We will further argue in Section 4 that even allowing h(·) and g(·) to be linear transformations greatly enhances the ability to remove the distorted feature distributions and reliably test their difference or equivalence. Also the test is non-parametric and handles missing (systematic/noisy) features among the two distributions of interest (see the appendix for more details).
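One heuristic way to calibrate the threshold α in practice is resampling. The sketch below is our illustration only (the paper's bounds and [16] give the principled calibration): it fits the transformation once and then permutes the pooled transformed samples, which ignores the variability introduced by parameter estimation. It assumes `mmd2` and `minimal_mmd` from the previous sketch are in scope.

import numpy as np

def minimal_mmd_test(xs, xt, alpha=0.05, n_perm=200, rng=None):
    # Permutation-style calibration of the minimal MMD test (heuristic).
    rng = rng or np.random.default_rng(0)
    (a, b), stat = minimal_mmd(xs, xt)
    pooled = np.vstack([a * xs + b, xt])  # transformed source, raw target
    m = len(xs)
    null_stats = []
    for _ in range(n_perm):
        perm = rng.permutation(len(pooled))
        s2 = mmd2(pooled[perm[:m]], pooled[perm[m:]])
        null_stats.append(np.sqrt(max(s2, 0.0)))
    threshold = np.quantile(null_stats, 1 - alpha)
    return stat, threshold, stat > threshold  # True means reject H0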
4 Consistency
Building upon the two-fold estimating and testing procedure presented in the previous sections,
we provide several guarantees about the estimation consistency and the power of minimal MMD
based hypothesis testing, both in the asymptotic and finite sample regimes. The technical results
presented here are applicable for large classes of transformation functions G with fairly weak and
reasonable assumptions on K. Specifically we consider Hölder-continuous h(·) and g(·) functions on compact sets Θ_θ and Θ_β. Like [15], we have K to be a bounded non-negative characteristic kernel, i.e., 0 ≤ K(x, x′) ≤ K̄ ∀x, x′, and we assume ∇K to be bounded in a neighborhood of 0. We note that technical results for an even more general class of kernels are fairly involved and so in this paper we restrict ourselves to radial basis kernels. Nevertheless, even under the above assumptions our null hypothesis space is more general than the one considered in [15] because of the extra transformations that we allow on the inputs. With these assumptions, and the Hölder-continuity of h(x_s, θ) and g(x_t, β), we assume

(A1) ‖K(h(x_s, θ_1), ·) − K(h(x_s, θ_2), ·)‖ ≤ L_h d(θ_1, θ_2)^{r_h} ∀x_s; θ_1, θ_2 ∈ Θ_θ
(A2) ‖K(g(x_t, β_1), ·) − K(g(x_t, β_2), ·)‖ ≤ L_g d(β_1, β_2)^{r_g} ∀x_t; β_1, β_2 ∈ Θ_β
4.1 Estimation Consistency
Observe that the minimization of (3) assumes that the null is true, i.e., the parameter estimates correspond to H0. Therefore, we discuss consistency in the context of existence of a unique set of parameters (θ_0, β_0) that match the distributions of x̃_s and x̃_t perfectly. By inspecting the structure of the objective in (2) and (3), we see that the estimates will be asymptotically unbiased. Our first set of results summarized here provides consistency of the estimation whenever the assumptions (A1) and (A2) hold. This consistency result follows from the convergence of the objective. All the proofs are included in the appendix.

Theorem 4.1 (MMD Convergence). Under H0, $\|\mathbb{E}_{x_s} K(h(x_s, \hat\theta), \cdot) - \mathbb{E}_{x_t} K(g(x_t, \hat\beta), \cdot)\|_{\mathcal{H}} \to 0$ at the rate $\max\left(\sqrt{\frac{\log n}{n}}, \sqrt{\frac{\log m}{m}}\right)$.

Theorem 4.2 (Consistency). Under H0, the estimators θ̂ and β̂ are consistent.

Remarks: Theorem 4.1 shows the convergence rate of the MMD distance between the source and the target after the transformations are applied. Recall that m and n are the sample sizes of source and target respectively, and h(x_s, θ̂) and g(x_t, β̂) are the recovered transformations.
4.2 Power of the Hypothesis Test
We now discuss the statistical power of minimal MMD based testing. The next set of results establishes that the testing setup from Section 3.1 is asymptotically consistent. Recall that M*(·) denotes the (unknown) expected statistic from (2) while M(·) is its empirical estimate from (4).

Theorem 4.3 (Hypothesis Testing). (a) Whenever H0 is true, with probability at least 1 − δ,

$$0 \le \mathcal{M}(\hat\theta, \hat\beta) \le \sqrt{\frac{2\bar{K}(m+n)\log \delta^{-1}}{mn}} + \frac{2\sqrt{\bar{K}}}{\sqrt{n}} + \frac{2\sqrt{\bar{K}}}{\sqrt{m}} \quad (5)$$

(b) Whenever HA is true, with probability at least 1 − ε,

$$\mathcal{M}(\hat\theta, \hat\beta) \le \mathcal{M}^*(\theta_A, \beta_A) + \sqrt{\frac{2\bar{K}(m+n)\log \epsilon^{-1}}{mn}} + \frac{2\sqrt{\bar{K}}}{\sqrt{n}} + \frac{2\sqrt{\bar{K}}}{\sqrt{m}}$$

$$\mathcal{M}(\hat\theta, \hat\beta) \ge \mathcal{M}^*(\theta_A, \beta_A) - \frac{\sqrt{\bar{K}}}{\sqrt{n}}\sqrt{4 + C^{(h,\epsilon)} + \frac{d_\theta}{2 r_h}\log n} - \frac{\sqrt{\bar{K}}}{\sqrt{m}}\sqrt{4 + C^{(g,\epsilon)} + \frac{d_\beta}{2 r_g}\log m} \quad (6)$$

where $C^{(h,\epsilon)} = \log(2|\Theta_\theta|) + \log \epsilon^{-1} + \frac{d_\theta}{r_h}\log\frac{L_h}{\sqrt{\bar{K}}}$ and $C^{(g,\epsilon)} = \log(2|\Theta_\beta|) + \log \epsilon^{-1} + \frac{d_\beta}{r_g}\log\frac{L_g}{\sqrt{\bar{K}}}$.
Remarks: We make a few comments about the theorem. Recall that the constant K̄ is the kernel bound, and L_h, L_g, r_h and r_g are defined in (A1)(A2). d_θ and d_β are the dimensions of the θ and β spaces respectively. Observe that whenever H0 is true, (5) shows that M(θ̂, β̂) approaches 0 as the sample size increases. Similarly, under HA the statistic converges to some positive (unknown) value M*(θ_A, β_A). Following these observations, Theorem 4.3 basically implies that the statistical power of our test (described in Section 3.1) increases to 1 as the sample sizes m, n increase. Up to constants, the upper bounds under both H0 and HA have a rate of max(1/√n, 1/√m), while the lower bound under HA has the rate max(√(log n)/√n, √(log m)/√m). In the appendix we show (see Lemma 4.5) that as m, n → ∞, the constants |Θ_θ|, |Θ_β| converge to a small positive number, thus removing the dependence of consistency on these constants.
The dependence on the sizes of the search spaces Θ_θ and Θ_β may nevertheless make the bounds for HA loose. In practice, one can choose "good" bound constraints based on some pre-processing on the source and target inputs (e.g., comparison of median and modes). The loss in power due to overestimated Θ_θ and Θ_β will be compensated by "large enough" sample sizes. Observe that this trade-off of sample size versus complexity of hypothesis space is fundamental in statistical testing and is not specific to our model. We further investigate this trade-off for certain special cases of transformations h(·) and g(·) that may be of interest in practice. For instance, consider the scenario where one of the transformations is the identity and the other one is linear in the unknowns. Specifically, x̃_t = x_t and h_0(x_s, θ_0) = φ(x_s)^T θ_0 where φ(·) is some known transformation. Although restrictive, this scenario is very common in medical data acquisition (refer to Section 1) where the source and target inputs are assumed to have linear/affine distortions. Within this setting, the assumptions for our technical results will be satisfied whenever φ(x_s) is bounded with high probability and with r_h = 1/2. We have the following result for this scenario (Var(·) denotes the empirical variance).
Theorem 4.4 (Linear transformation). Under H0, identity g(·) with h = φ(x_s)^T θ, we have

$$\Theta_\theta := \left\{\theta \,;\; \frac{1}{n}\sum_{i=1}^{n} \left\|x_t^i - \phi(x_s^i)^T \theta\right\|^2 \le 3\sum_{k=1}^{p} \mathrm{Var}(x_{t,k}) + \epsilon \right\}.$$

For any ε, δ > 0 and sufficiently large sample size, a neighborhood of θ_0 is contained in Θ_θ with probability at least 1 − δ.
Observe that the subscript k in x_{t,k} above denotes the k-th feature dimension of x_t. The above result implies that the search space for θ reduces to a quadratic constraint in the above described example scenario. Clearly, this refined search region would enhance the statistical power of the test even when the sample sizes are small (which is almost always the case in population studies). Note that such refined sets may be computed using "extra" information about the structure of the transformations and/or input data statistics, thereby allowing for better estimation and higher power. Lastly, we point
out that the ideas presented in [16] for a finite sample testing setting translate to our model as well
but we do not present explicit details in this work.
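To illustrate how the constraint set of Theorem 4.4 is used, here is a sketch of the membership check (our reading of the theorem, not the authors' code; the pairing of source and target samples is arbitrary since only the empirical average enters the bound).

import numpy as np

def in_theta_set(theta, phi_xs, xt, eps=0.1):
    # Constraint set of Theorem 4.4: average squared residual of the linear
    # map compared against three times the total target variance plus slack.
    # phi_xs: (n, p) transformed source features; xt: (n, p) target samples;
    # theta: (p, p) candidate linear map.
    resid = np.mean(np.sum((xt - phi_xs @ theta) ** 2, axis=1))
    radius = 3.0 * np.sum(np.var(xt, axis=0)) + eps
    return resid <= radius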
4.3 Optimization Lower Bounds
We see that it is valid to assume that the feasible set is compact and convex for our purposes
(Theorem 4.4). This immediately allows us to use algorithms that exploit feasible set compactness to
estimate model parameters, for instance, conditional gradient algorithms which have low per iteration
complexity [23]. Even though these algorithms offer practical benefits, with a non-convex objective it is nontrivial to analyze their theoretical/convergence aspects, and as was noted earlier in Section 3, except for simplistic G, G′ and K, the minimization in (3) might involve a non-convex objective.
We turn to some recent results which have shown that specific classes of non-convex problems or
NP-Hard problems can be solved to any desired accuracy using a sequence of convex optimization
problems [33]. This strategy is currently an active area of research and has already shown to provide
impressive performance in practice [3].
Very recently, [4] showed that one such class of problems called signomial programming can be solved using successive relative entropy relaxations. Interestingly, we show that for the widely-used class of Gaussian kernels, our objective can be optimized using these ideas. For notational simplicity, we do not transform the targets, i.e., x̃_t = x_t or g(·) is identity, and only allow for linear transformations h(·). Observe that, with respect to the estimation problem (refer to (3)) this is the same as transforming both source and target inputs. When K is Gaussian, the objective in (3) with identity g(·) and linear
h(·) (θ corresponds to slope and intercept here) can be equivalently written as,

$$\min_{\theta \in \Theta_\theta} \left( \frac{1}{n^2}\sum_{i=1}^{n}\sum_{j=1}^{n} K(h(x_s^i, \theta), h(x_s^j, \theta)) - \frac{2}{mn}\sum_{i=1}^{m}\sum_{j=1}^{n} K(x_t^i, h(x_s^j, \theta)) \right)$$
$$:= \min_{\theta \in \Theta_\theta} \left( \frac{1}{n^2}\sum_{i,j} \exp\left(-a_{ij}^T \theta \theta^T a_{ij}\right) - \frac{2}{mn}\sum_{i,j} \exp\left(-\left(b_{ij}^T \theta \theta^T b_{ij} + 2c\, b_{ij}^T \theta + c^2\right)\right) \right) \quad (7)$$

for appropriate a_{ij}, b_{ij} and c. Denoting Λ = θθ^T, the above objective can be made linear in the decision variables Λ and θ, thus putting it in the standard form of signomial optimization. The convex relaxation of the quadratic equality constraint is Λ − θθ^T ⪰ 0, hence we seek to solve,

$$\min_{\Lambda, \theta} \; \frac{1}{n^2}\sum_{i,j} \exp\left(\mathrm{tr}(A_{ij}\Lambda)\right) - \frac{2}{mn}\sum_{i,j} \exp\left(\mathrm{tr}(B_{ij}\Lambda) + C_{ij}^T \theta + c^2\right) \quad \text{s.t.} \quad \Lambda - \theta\theta^T \succeq 0 \quad (8)$$
Clearly the objective is exactly in the form that [4] solves, albeit we also have a convex constraint.
Nevertheless, using their procedure for the unconstrained signomial optimization we can write a
sequence of convex relaxations for this objective. This sequence is hierarchical, in the sense that, as
we go down the sequence, each problem gives tighter bounds to the original nonconvex objective [4].
For our applications, since the confidence interval procedure (mentioned earlier) naturally suggests a good initial point, any generic (local) numerical optimization scheme such as trust region or gradient projection can be used to solve (7), whereas the hierarchy of (8) can be used in general when one does not have access to a good starting point.
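A sketch of the local route for (7) with a trusted starting point (our illustration, with a fixed-bandwidth Gaussian kernel; the hierarchy of (8) would replace this when no initialization is available):

import numpy as np
from scipy.optimize import minimize

def objective_7(theta, xs, xt, sigma=1.0):
    # Gaussian-kernel objective of (7) for a linear map h(x) = x @ W + b;
    # the target-target kernel term is constant in theta, so it is omitted.
    d = xs.shape[1]
    W, b = theta[:d * d].reshape(d, d), theta[d * d:]
    hs = xs @ W + b
    def k(A, B):
        sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-sq / (2 * sigma**2))
    n, m = len(xs), len(xt)
    return k(hs, hs).sum() / n**2 - 2 * k(xt, hs).sum() / (m * n)

rng = np.random.default_rng(0)
xs = rng.normal(size=(300, 2))
xt = xs @ np.array([[1.5, 0.0], [0.3, 0.8]]) + np.array([1.0, -0.5])
theta0 = np.concatenate([np.eye(2).ravel(), np.zeros(2)])  # identity start
res = minimize(objective_7, theta0, args=(xs, xt), method="Nelder-Mead",
               options={"maxiter": 5000})
print(res.x)  # approaches the distortion above, up to distributional symmetries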
5 Experiments
Design and Overall Goals. We performed evaluations on both synthetic data as well as data from
an AD study. (A) We first evaluate the goodness of our estimation procedure and the power of the
minimal MMD based test when the source and target inputs are known transformations of samples
from different distribution families (e.g., Normal, Laplace). Here, we seek to clearly identify the
influence of the sample size as well as the effect of the transformations on recoverability. (B) After
these checks, we then apply our proposed model for matching CSF protein levels of 600 subjects.
These biomarkers were collected in two different batches; it is known that the measures for the same
participant (across batches) have high variability [20]. In our data, fortunately, a subset of individuals
have both batch data (the ?real? measurement must be similar in both batches) whereas a fraction
of individuals? CSF is only available in one batch. If we find a linear standardization between the
Normal target vs. different sources
Acceptance Rate
Acceptance Rate
0.8
0.6
0.4
Normal(0,1)
Laplace(0,1)
Exponential(1)
0.2
0
4
Models linear in parameters
1.2
1
Estimation Errors normal vs. normal
1.2
1
1
0.8
0.8
L1 Error
1.2
0.6
0.4
Slope
Intercept
0.6
0.4
0.2
0.2
2
a*x +b*x+c
a*log(|x|)+b
6
8
0
10
4
Sample Size (Log2 scale)
6
8
0
10
2
Sample Size (Log2 scale)
(a)
4
6
8
10
12
Sample Size (Log2 scale)
(b)
(c)
Estimation error for 2D simulation
Minimal MMD histogram (128 samples)
Model 1, first row
Model 1, second row
Model 2, first row
Model 2, second row
1
4
5
6
0.4
0.2
7
Sample Size (Log2 scale)
(d)
0.6
8
9
0
Nor vs Nor
Nor vs Exp
Nor vs Lap
0.8
histogram
1.5
1
Nor vs Nor
Nor vs Exp
Nor vs Lap
0.8
2
0.5
Minimal MMD histogram (1024 samples)
1
histogram
Quartic Mean of estimation error
2.5
0.6
0.4
0.2
0
0.005
0.01
0.015
mMMD value
(e)
0.02
0.025
0
0
0.005
0.01
0.015
mMMD value
(f)
Figure 1: (a,b) Acceptance Ratios, (c,d) Estimation errors, (e,f) Histograms of minimal MMD statistic.
7
two batches it serves as a gold standard, against which we compare our algorithm which does not
use information about corresponding samples. Note that the standardization trick is unavailable in
multi-center studies; we use this data in this paper simply to make the description of our evaluation
design simpler which, for multi-site data pooling, must be addressed using secondary analyses.
Synthetic data. Fig. 1 summarizes our results on synthetic data where the sources are Normal samples and the targets come from different families. First, observe that our testing procedure efficiently rejects H0 whenever the targets are not Normal (blue and black curves in Fig. 1(a)). If the transformation class is beyond linear (e.g., log), the null is efficiently rejected as samples increase (see Fig. 1(b)). Beyond the testing power, Figs. 1(c,d) show the error in the actual estimates, which decreases as the sample size increases (with tighter confidence intervals). The appendix includes additional model details. To get a better idea about the minimal MMD statistic, we show its histogram (over multiple bootstrap simulations) for different targets in Fig. 1(e,f). The green line here denotes the bootstrap significance threshold (0.05). In Fig. 1(e,f), the red curve is always to the left of the threshold, as desired. However, the samples are not enough to reject the null for the black and blue curves; we will need larger sample sizes (Fig. 1(f)). If needed, the minimal MMD value can be used to obtain a better threshold. Overall, these plots show that the minimal MMD based estimation and testing setup robustly removes the feature distortions and facilitates the statistical test.
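A rough analogue of this synthetic setup is easy to script on top of the earlier sketches (our reconstruction, not the experiment code used for the figures):

import numpy as np

rng = np.random.default_rng(0)
n = 256
samplers = {"Normal": lambda: rng.normal(0, 1, (n, 1)),
            "Laplace": lambda: rng.laplace(0, 1, (n, 1)),
            "Exponential": lambda: rng.exponential(1, (n, 1))}
for name, sampler in samplers.items():
    xs = sampler()
    xt = 2.0 * rng.normal(0, 1, (n, 1)) + 1.0  # affinely distorted Normal target
    stat, thr, reject = minimal_mmd_test(xs, xt, n_perm=100)
    print(name, "reject H0:", reject)  # expect rejection except for Normal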
[Figure 2 appears here: bar plot of the relative error on comparison to the baseline for proteins p1 through p12, comparing the linear model, minimal MMD on S1, and minimal MMD on S2; the relative-error axis ranges from 0 to 0.25.]
Figure 2: Relative error in transformation estimation between CSF batches.
AD study. Fig 2 shows the relative errors after correcting the feature distortions between the two
batches in the 12 CSF proteins. The bars correspond to simple linear "standardization" transformation
where we assume we have corresponding sample information (blue) and our minimal MMD based
domain adaptation procedure on sets S1 and S2 (S1 : participants available in both batches, S2 : all
participants). Our models perform as well as the gold standard (where some subjects have volunteered
CSF sampling for both batches). Specifically, the trends in Fig 2 indicate that our minimal MMD
based testing procedure is a powerful procedure for conducting analyses on such pooled datasets.
To further validate these observations, we used the "transformed" CSF data from the two batches (our algorithm and gold standard) and performed a multiple regression to predict Left and Right Hippocampal Volume (which are known to be AD markers). Table 1 shows that the correlations (predicted vs. actual) resulting from the minimal MMD corrected data are comparable or offer improvements to the alternatives. We point out that the best correlations are achieved when all the data is used with minimal MMD (which the gold standard cannot benefit from). Any downstream prediction tasks we wish to conduct are independent of the model presented here.

Table 1: Performance of transformed (our vs. gold standard) CSF on a regression task.
Model    Left           Right
None     0.46 ± 0.15    0.37 ± 0.16
Linear   0.46 ± 0.15    0.37 ± 0.16
M (S1)   0.48 ± 0.15    0.39 ± 0.15
M (S2)   0.48 ± 0.15    0.40 ± 0.15
6 Conclusions
We presented a framework for kernelized statistical testing on data from multiple sources when the
observed measurements/features have been systematically distorted/transformed. While there is a rich
body of work on kernel test statistics based on the maximum mean discrepancy and other measures,
the flexibility to account for a given class of transformations offers improvements in statistical power.
We analyze the statistical properties of the estimation and demonstrate how such a formulation may
enable pooling datasets from multiple participating sites, and facilitate the conduct of neuroscience
studies with substantially higher sample sizes which may be otherwise infeasible.
Acknowledgments: This work is supported by NIH AG040396, NIH U54AI117924, NSF
DMS1308847, NSF CAREER 1252725, NSF CCF 1320755 and UW CPCP AI117924. The authors
are grateful for partial support from UW ADRC AG033514 and UW ICTR 1UL1RR025011. We
thank Marilyn S. Albert (Johns Hopkins) and Anne Fagan (Washington University at St. Louis) for
discussions at a preclinical Alzheimer's disease meeting in 2015 (supported by Stay Sharp fund).
References
[1] M Baktashmotlagh, M Harandi, B Lovell, and M Salzmann. Unsupervised domain adaptation by domain
invariant projection. In Proceedings of the IEEE ICCV, 2013.
[2] S Ben-David, J Blitzer, K Crammer, et al. A theory of learning from different domains. Machine learning,
2010.
[3] B Chalise, Y Zhang, and M Amin. Successive convex approximation for system performance optimization
in a multiuser network with multiple mimo relays. In IEEE CAMSAP, 2011.
[4] V Chandrasekaran and P Shah. Relative entropy relaxations for signomial optimization. arXiv:1409.7640,
2014.
[5] C Cortes and M Mohri. Domain adaptation in regression. In Algorithmic Learning Theory, 2011.
[6] T Dall, P Gallo, R Chakrabarti, T West, A Semilla, and M Storm. An aging population and growing disease
burden will require alarge and specialized health care workforce by 2025. Health Affairs, 2013.
[7] H Daumé III, A Kumar, and A Saha. Frustratingly easy semi-supervised domain adaptation. In Proceedings
of the 2010 Workshop on Domain Adaptation for Natural Language Processing, 2010.
[8] P Dollár, C Wojek, B Schiele, and P Perona. Pedestrian detection: A benchmark. In CVPR, 2009.
[9] B Fernando, A Habrard, M Sebban, and T Tuytelaars. Unsupervised visual domain adaptation using
subspace alignment. In Proceedings of the IEEE ICCV, 2013.
[10] Y Ganin and V Lempitsky. Unsupervised domain adaptation by backpropagation. arXiv:1409.7495, 2014.
[11] B Gong, K Grauman, and F Sha. Connecting the dots with landmarks: Discriminatively learning domaininvariant features for unsupervised domain adaptation. In ICML, 2013.
[12] B Gong, Y Shi, F Sha, and K Grauman. Geodesic flow kernel for unsupervised domain adaptation. In
CVPR, 2012.
[13] Boqing Gong. Kernel Methods for Unsupervised Domain Adaptation. PhD thesis, Citeseer, 2015.
[14] R Gopalan, R Li, and R Chellappa. Domain adaptation for object recognition: An unsupervised approach.
In Proceedings of the IEEE ICCV, 2011.
[15] A Gretton, K Borgwardt, M Rasch, B Schölkopf, and A Smola. A kernel two-sample test. JMLR, 2012.
[16] A Gretton, K Fukumizu, Z Harchaoui, and B Sriperumbudur. A fast, consistent kernel two-sample test. In
NIPS, 2009.
[17] Glioma Meta-analysis Trialists GMT Group. Chemotherapy in adult high-grade glioma: a systematic
review and meta-analysis of individual patient data from 12 randomised trials. The Lancet, 2002.
[18] M Haase, R Bellomo, P Devarajan, P Schlattmann, et al. Accuracy of neutrophil gelatinase-associated
lipocalin (ngal) in diagnosis and prognosis in acute kidney injury: a systematic review and meta-analysis.
American Journal of Kidney Diseases, 2009.
[19] J Huang, A Gretton, K Borgwardt, B Schölkopf, and A Smola. Correcting sample selection bias by
unlabeled data. In NIPS, 2006.
[20] W Klunk, R Koeppe, J Price, T Benzinger, M Devous, et al. The centiloid project: standardizing quantitative
amyloid plaque estimation by PET. Alzheimer's & Dementia, 2015.
[21] W Klunk, R Koeppe, J Price, T Benzinger, et al. The centiloid project: standardizing quantitative amyloid
plaque estimation by PET. Alzheimer's & Dementia, 2015.
[22] A Kumar, A Saha, and H Daume. Co-regularization based semi-supervised domain adaptation. In NIPS,
2010.
[23] S Lacoste-Julien, M Jaggi, M Schmidt, and P Pletscher. Block-coordinate frank-wolfe optimization for
structural svms. arXiv:1207.4747, 2012.
[24] Qi Li. Literature survey: domain adaptation algorithms for natural language processing. Department of CS
The Graduate Center, The City University of New York, 2012.
[25] X Nguyen, M Wainwright, and M Jordan. Estimating divergence functionals and the likelihood ratio by
convex risk minimization. Information Theory, IEEE Transactions on, 2010.
[26] S Pan, I Tsang, J Kwok, and Q Yang. Domain adaptation via transfer component analysis. Neural Networks,
IEEE Transactions on, 2011.
[27] V Patel, R Gopalan, R Li, and R Chellappa. Visual domain adaptation: A survey of recent advances. Signal
Processing Magazine, IEEE, 2015.
[28] B Plassman, K Langa, G Fisher, S Heeringa, et al. Prevalence of dementia in the united states: the aging,
demographics, and memory study. Neuroepidemiology, 2007.
[29] K Saenko, B Kulis, M Fritz, and T Darrell. Adapting visual category models to new domains. In ECCV.
2010.
[30] D Sejdinovic, B Sriperumbudur, A Gretton, K Fukumizu, et al. Equivalence of distance-based and
rkhs-based statistics in hypothesis testing. The Annals of Statistics, 2013.
[31] B Sriperumbudur, K Fukumizu, A Gretton, G Lanckriet, and B Schölkopf. Kernel choice and classifiability
for rkhs embeddings of probability distributions. In NIPS, 2009.
[32] A Torralba and A Efros. Unbiased look at dataset bias. In CVPR, 2011.
[33] L Tunçel. Polyhedral and semidefinite programming methods in combinatorial optimization. AMS, 2010.
[34] H Vanderstichele, M Bibl, S Engelborghs, N Le Bastard, et al. Standardization of preanalytical aspects
of cerebrospinal fluid biomarker testing for Alzheimer's disease diagnosis: a consensus paper from the
Alzheimer's biomarkers standardization initiative. Alzheimer's & Dementia, 2012.
5,758 | 621 | Some Solutions to the Missing Feature Problem in Vision

Subutai Ahmad
Siemens AG, Central Research and Development
ZFE ST SN61, Otto-Hahn Ring 6
8000 München 83, Germany.
[email protected]

Volker Tresp
Siemens AG, Central Research and Development
ZFE ST SN41, Otto-Hahn Ring 6
8000 München 83, Germany.
[email protected]
Abstract
In visual processing the ability to deal with missing and noisy information is crucial. Occlusions and unreliable feature detectors often lead to
situations where little or no direct information about features is available. However the available information is usually sufficient to highly
constrain the outputs. We discuss Bayesian techniques for extracting
class probabilities given partial data. The optimal solution involves integrating over the missing dimensions weighted by the local probability
densities. We show how to obtain closed-form approximations to the
Bayesian solution using Gaussian basis function networks. The framework extends naturally to the case of noisy features. Simulations on a
complex task (3D hand gesture recognition) validate the theory. When
both integration and weighting by input densities are used, performance
decreases gracefully with the number of missing or noisy features. Performance is substantially degraded if either step is omitted.
1 INTRODUCTION
The ability to deal with missing or noisy features is vital in vision. One is often faced with
situations in which the full set of image features is not computable. In fact, in 3D object
recognition, it is highly unlikely that all features will be available. This can be due to self-occlusion, occlusion from other objects, shadows, etc. To date the issue of missing features has not been dealt with in neural networks in a systematic way. Instead the usual
practice is to substitute a single value for the missing feature (e.g. 0, the mean value of the
feature, or a pre-computed value) and use the network's output on that feature vector.
[Figure 1 appears here: two 2-d feature-space diagrams, (a) and (b), with axes x and y, six class regions, and the measured value y0 marked by a dashed line.]
Figure 1. The images show two possible situations for a 6-class classification problem. (Dark
shading denotes high-probability regions.) If the value of feature x is unknown, the correct
solution depends both on the classification boundaries along the missing dimension and on the
distribution of exemplars.
When the features are known to be noisy, the usual practice is to just use the measured
noisy features directly. The point of this paper is to show that these approaches are not
optimal and that it is possible to do much better.
A simple example serves to illustrate why one needs to be careful in dealing with missing
features. Consider the situation depicted in Figure 1(a). It shows a 2-d feature space with 6
possible classes. Assume a network has already been trained to correctly classify these
regions. During classification of a novel exemplar, only feature y has been measured, as
y0; the value of feature x is unknown. For each class Ci, we would like to compute p(Ci|y).
Since nothing is known about x, the classifier should assign equal probability to classes 1,
2, and 3, and zero probability to classes 4,5, and 6. Note that substituting any single value
will always produce the wrong result. For example, if the mean value of x is substituted,
the classifier would assign a probability near 1 for class 2. To obtain the correct posterior
probability, it is necessary to integrate the network output over all values of x. But there is
one other fact to consider: the probability distribution over x may be highly constrained by
the known value of feature y. With a distribution as in Figure 1(b) the classifier should
assign class 1 the highest probability. Thus it is necessary to integrate over x along the line
y = y0 weighted by the joint distribution p(x,y).
2 MISSING FEATURES

We first show how the intuitive arguments outlined above for missing inputs can be formalized using Bayes rule. Let x represent a complete feature vector. We assume the classifier outputs good estimates of p(Ci|x) (most reasonable classifiers do; see (Richard & Lippmann, 1991)). In a given instance, x can be split up into x_c, the vector of known (certain) features, and x_u, the unknown features. When features are missing the task is to estimate p(Ci|x_c). Computing marginal probabilities we get:

$$p(C_i \mid x_c) = \frac{\int p(C_i \mid x_c, x_u)\, p(x_c, x_u)\, dx_u}{p(x_c)} \quad (1)$$

Note that p(Ci|x_c, x_u) is approximated by the network output and that in order to use (1) effectively we need estimates of the joint probabilities of the inputs.
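For intuition, here is a hypothetical Monte Carlo reading of (1) (every name in it is our own): draw complete vectors from an estimate of the input density, weight them by how well their known coordinates match the observation, and average the classifier outputs.

import numpy as np

def posterior_missing(classifier, density_sampler, x_c, known_idx,
                      n_samples=1000, bandwidth=0.25, rng=None):
    # classifier(x):      returns the vector p(C_i | x) for a complete input x
    # density_sampler(k): draws k complete feature vectors from an estimate of p(x)
    rng = rng or np.random.default_rng(0)
    draws = density_sampler(n_samples)
    # kernel weight approximating conditioning on the observed coordinates
    diff = (draws[:, known_idx] - x_c) / bandwidth
    w = np.exp(-0.5 * np.sum(diff**2, axis=1))
    draws[:, known_idx] = x_c  # clamp the known features
    probs = np.array([classifier(x) for x in draws])
    return (w[:, None] * probs).sum(0) / max(w.sum(), 1e-300)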
3 NOISY FEATURES

The missing feature scenario can be extended to deal with noisy inputs. (Missing features are simply noisy features in the limiting case of complete noise.) Let x_c be the vector of features measured with complete certainty, x_u the vector of measured, uncertain features, and x_tu the true values of the features in x_u. p(x_u|x_tu) denotes our knowledge of the noise (i.e. the probability of measuring the (uncertain) value x_u given that the true value is x_tu). We assume that this is independent of x_c and C_i, i.e. that p(x_u|x_tu, x_c, C_i) = p(x_u|x_tu). (Of course the value of x_tu is dependent on x_c and C_i.) We want to compute p(C_i|x_c, x_u). This can be expressed as:

$$p(C_i \mid x_c, x_u) = \frac{\int p(x_c, x_u, x_{tu}, C_i)\, dx_{tu}}{p(x_c, x_u)} \quad (2)$$

Given the independence assumption, this becomes:

$$p(C_i \mid x_c, x_u) = \frac{\int p(C_i \mid x_c, x_{tu})\, p(x_c, x_{tu})\, p(x_u \mid x_{tu})\, dx_{tu}}{\int p(x_c, x_{tu})\, p(x_u \mid x_{tu})\, dx_{tu}} \quad (3)$$

As before, p(C_i|x_c, x_tu) is given by the classifier. (3) is almost the same as (1) except that the integral is also weighted by the noise model. Note that in the case of complete uncertainty about the features (i.e. the noise is uniform over the entire range of the features), the equations reduce to the missing feature case.
4 GAUSSIAN BASIS FUNCTION NETWORKS
The above discussion shows how to optimally deal with missing and noisy inputs in a Bayesian sense. We now show how these equations can be approximated using networks of Gaussian basis functions (GBF nets). Let us consider GBF networks where the Gaussians have diagonal covariance matrices (Nowlan, 1990). Such networks have proven to be useful in a number of real-world applications (e.g. Roscheisen et al, 1992). Each hidden unit is characterized by a mean vector $\mu_j$ and by $\sigma_j$, a vector representing the diagonal of the covariance matrix. The network output is:
$$y_i(x) = \frac{\sum_j w_{ij}\, b_j(x)}{\sum_j b_j(x)}$$
with
$$b_j(x) = \pi_j\, n(x; \mu_j, \sigma_j) = \frac{\pi_j}{(2\pi)^{d/2} \prod_k \sigma_{kj}}\, \exp\left[ -\sum_k \frac{(x_k - \mu_{kj})^2}{2\sigma_{kj}^2} \right] \qquad (4)$$
where $w_{ij}$ is the weight from the j-th basis unit to the i-th output unit, $\pi_j$ is the probability of choosing unit j, and d is the dimensionality of x.
4.1 GBF NETWORKS AND MISSING FEATURES
Under certain training regimes such as Gaussian mixture modeling, EM or "soft clustering" (Duda & Hart, 1973; Dempster et al, 1977; Nowlan, 1990) or an approximation as in (Moody & Darken, 1988) the hidden units adapt to represent local probability densities. In particular $y_i(x) \approx p(C_i \mid x)$ and $p(x) \approx \sum_j b_j(x)$. This is a major advantage of this architecture and can be exploited to obtain closed form solutions to (1) and (3). Substituting into (3) we get:
$$p(C_i \mid x_c, x_u) = \frac{\int \big( \sum_j w_{ij}\, b_j(x_c, x_u^t) \big)\, p(x_u \mid x_u^t)\, dx_u^t}{\int \big( \sum_j b_j(x_c, x_u^t) \big)\, p(x_u \mid x_u^t)\, dx_u^t} \qquad (5)$$
For the case of missing features equation (5) can be computed directly. As noted before, equation (1) is simply (3) with $p(x_u \mid x_u^t)$ uniform. Since the infinite integral along each dimension of a multivariate normal density is equal to one we get:
$$p(C_i \mid x_c) = \frac{\sum_j w_{ij}\, b_j(x_c)}{\sum_j b_j(x_c)} \qquad (6)$$
(Here $b_j(x_c)$ denotes the same function as in (4) except that it is only evaluated over the known dimensions given by $x_c$.) Equation (6) is appealing since it gives us a simple closed form solution. Intuitively, the solution is nothing more than projecting the Gaussians onto the dimensions which are available and evaluating the resulting network. As the number of training patterns increases, (6) will approach the optimal Bayes solution.
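To make eq. (6) concrete, here is a small NumPy sketch (ours, not the authors' code) of a diagonal-covariance GBF network evaluated over only the observed dimensions; the toy network parameters in the usage example are made up.

```python
import numpy as np

def gbf_posterior_missing(x, known, mu, sigma, w, pi):
    """Eq. (6): evaluate a diagonal-covariance GBF network using only the
    known dimensions of x. Shapes: mu, sigma are (J, d); w is (J, C);
    pi is (J,); `known` is a boolean mask of length d."""
    xk = x[known]
    muk = mu[:, known]                      # project each Gaussian onto known dims
    sk = sigma[:, known]
    # log b_j(x_c): Gaussian density over the known dimensions, plus prior pi_j
    log_b = (-0.5 * ((xk - muk) / sk) ** 2 - np.log(sk)).sum(axis=1)
    b = pi * np.exp(log_b - log_b.max())    # stable up to a common factor
    return (w * b[:, None]).sum(axis=0) / b.sum()

# Toy network: 3 hidden units, 2 classes, 4 input features, feature 2 missing.
rng = np.random.default_rng(0)
mu = rng.normal(size=(3, 4)); sigma = np.ones((3, 4))
w = np.array([[1., 0.], [0., 1.], [0.5, 0.5]]); pi = np.ones(3) / 3
x = rng.normal(size=4)
known = np.array([True, True, False, True])
print(gbf_posterior_missing(x, known, mu, sigma, w, pi))
```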
4.2 GBF NETWORKS AND NOISY FEATURES
With noisy features the situation is a little more complicated and the solution depends on
the form of the noise. If the noise is known to be uniform in some region [a, b] then
equation (5) becomes:
$$p(C_i \mid x_c, x_u) = \frac{\sum_j w_{ij}\, b_j(x_c) \prod_{i \in U} \left[ N(b_i; \mu_{ij}, \sigma_{ij}^2) - N(a_i; \mu_{ij}, \sigma_{ij}^2) \right]}{\sum_j b_j(x_c) \prod_{i \in U} \left[ N(b_i; \mu_{ij}, \sigma_{ij}^2) - N(a_i; \mu_{ij}, \sigma_{ij}^2) \right]} \qquad (7)$$
Here $\mu_{ij}$ and $\sigma_{ij}^2$ select the i-th component of the j-th mean and variance vectors. U ranges over the noisy feature indices. Good closed form approximations to the normal distribution function $N(x; \mu, \sigma^2)$ are available (Press et al, 1986) so (7) is efficiently computable. With zero-mean Gaussian noise with variance $\sigma_u^2$, we can also write down a closed form solution. In this case we have to integrate a product of two Gaussians and end up with:
$$p(C_i \mid x_c, x_u) = \frac{\sum_j w_{ij}\, \tilde b_j(x_c, x_u)}{\sum_j \tilde b_j(x_c, x_u)}, \qquad \text{with } \tilde b_j(x_c, x_u) = n\big(x_u;\, \mu_{ju},\, \sigma_u^2 + \sigma_{ju}^2\big)\, b_j(x_c).$$
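The Gaussian-noise closed form above can be implemented directly; the sketch below is our own illustration, with assumed shapes and parameter names, not the authors' code.

```python
import numpy as np

def gbf_posterior_gaussian_noise(xc, xu, sigma_noise, mu_c, s_c, mu_u, s_u, w, pi):
    """Closed form for zero-mean Gaussian measurement noise on the uncertain
    features x_u: each unit's response over the noisy dimensions is a Gaussian
    with inflated variance sigma_noise^2 + sigma_ju^2. mu_c, s_c are (J, |c|);
    mu_u, s_u are (J, |u|); w is (J, C); pi is (J,)."""
    # log b_j(x_c) over the certain dimensions
    log_bc = (-0.5 * ((xc - mu_c) / s_c) ** 2 - np.log(s_c)).sum(axis=1)
    # log n(x_u; mu_ju, sigma_noise^2 + sigma_ju^2) over the noisy dimensions
    var = sigma_noise ** 2 + s_u ** 2
    log_bu = (-0.5 * (xu - mu_u) ** 2 / var - 0.5 * np.log(var)).sum(axis=1)
    b = pi * np.exp(log_bc + log_bu - (log_bc + log_bu).max())
    return (w * b[:, None]).sum(axis=0) / b.sum()
```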
5 BACKPROPAGATION NETWORKS
With a large training set, the outputs of a sufficiently large network trained with backpropagation converge to the optimal Bayes a posteriori estimates (Richard & Lippmann, 1992). If $B_i(x)$ is the output of the i-th output unit when presented with input x, $B_i(x) \approx p(C_i \mid x)$. Unfortunately, access to the input distribution is not available with backpropagation. Without prior knowledge it is reasonable to assume a uniform input distribution, in which case the right hand side of (3) simplifies to:
$$p(C_i \mid x_c) = \frac{\int p(C_i \mid x_c, x_u^t)\, p(x_u \mid x_u^t)\, dx_u^t}{\int p(x_u \mid x_u^t)\, dx_u^t} \qquad (8)$$
The integral can be approximated using standard Monte Carlo techniques. With uniform noise in the interval [a, b], this becomes (ignoring normalizing constants):
$$p(C_i \mid x_c) \approx \int_a^b B_i(x_c, x_u^t)\, dx_u^t \qquad (9)$$
With missing features the integral in (9) is computed over the entire range of each feature.
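A minimal Monte Carlo implementation of eq. (9) might look as follows; `net` is a hypothetical stand-in for any trained backpropagation classifier, and the toy linear "network" in the usage line is purely illustrative.

```python
import numpy as np

def bp_posterior_uniform_noise(net, x, idx_u, a, b, n_samples=1000, seed=0):
    """Approximate eq. (9) by Monte Carlo: average the network outputs over
    uniform samples of the uncertain features in [a, b]. For missing
    features, take [a, b] to be the full range of each feature."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_samples):
        z = x.copy()
        z[idx_u] = rng.uniform(a, b, size=len(idx_u))  # resample uncertain dims
        total = total + net(z)                         # accumulate B_i(x_c, x_u^t)
    return total / n_samples                           # unnormalized posteriors

# Toy usage (illustrative only): feature 2 of a 3-feature input is uncertain.
net = lambda z: np.array([z.sum(), 1.0 - z.sum()])
x = np.array([0.2, 0.5, 0.0])
print(bp_posterior_uniform_noise(net, x, [2], 0.0, 1.0))
```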
6 AN EXAMPLE TASK: 3D HAND GESTURE RECOGNITION
A simple realistic example serves to illustrate the utility of the above techniques. We consider the task of recognizing a set of hand gestures from single 2D images independent of
3D orientation (Figure 2). As input, each classifier is given the 2D polar coordinates of the
five fingertip positions relative to the 2D center of mass of the hand (so the input space is
10-dimensional). Each classifier is trained on a training set of 4368 examples (624 poses
for each gesture) and tested on a similar independent test set.
The task forms a good benchmark for testing performance with missing and uncertain
inputs. The classification task itself is non-trivial. The classifier must learn to deal with
hands (which are complex non-rigid objects) and with perspective projection (which is
non-linear and non-invertible). In fact it is impossible to obtain a perfect score since in
certain poses some of the gestures are indistinguishable (e.g. when the hand is pointing
directly at the screen). Moreover, the task is characteristic of real vision problems. The
397
398
Ahmad and Tresp
"five"
"four"
"three"
"two"
"one"
"thumbs_up" "pointing"
Figure 2. Examples of the 7 gestures used to train the classifier. A 3D computer model of
the hand is used to generate images of the hand in various poses. For each training example, we choose a 3D orientation, compute the 3D positions of the fingertips and project
them onto 2D. For this task we assume that the correspondence between image and model
features is known, and that during training all feature values are always available.
position of each finger is highly (but not completely) constrained by the others resulting in
a very non-uniform input distribution. Finally it is often easy to see what the classifier
should output if features are uncertain. For example suppose the real gesture is "five" but
for some reason the features from the thumb are not reliably computed. In this case the
gestures "four" and "five" should both get a positive probability whereas the rest should
get zero. In many such cases only a single class should get the highest score, e.g. if the features for the little finger are uncertain the correct class is still "five".
We tried three classifiers on this task: standard sigmoidal networks trained with backpropagation (BP), and two types of Gaussian networks as described in Section 4. In the first (Gauss-RBF), the Gaussians were radial and the centers were determined using k-means clustering as in (Moody & Darken, 1988). $\sigma^2$ was set to twice the average distance of each point to its nearest Gaussian (all Gaussians had the same width). After clustering, $\pi_j$ was set to
$$\pi_j = \sum_k \left[ \frac{n(x_k; \mu_j, \sigma_j)}{\sum_{j'} n(x_k; \mu_{j'}, \sigma_{j'})} \right].$$
The output weights were then determined using LMS gradient
descent. In the second (Gauss-G), each gaussian had a unique diagonal covariance matrix.
The centers and variances were determined using gradient descent on all the parameters
(Roscheisen et al, 1992). Note that with this type of training, even though gaussian hidden
units are used, there is no guarantee that the distribution information will be preserved.
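A compact sketch of the Gauss-RBF training pipeline described above (k-means centers, a shared width, mixing weights, LMS output weights); this is our simplified re-implementation with assumed conventions, not the authors' code.

```python
import numpy as np

def fit_gauss_rbf(X, T, n_units, n_iters=200, lr=0.01, seed=0):
    """Sketch of Gauss-RBF training: k-means centers; a shared width set to
    twice the mean distance to the nearest center; pi_j from relative unit
    responsibilities; output weights fit by LMS gradient descent.
    T holds one-hot targets of shape (n_samples, n_classes)."""
    rng = np.random.default_rng(seed)
    mu = X[rng.choice(len(X), n_units, replace=False)]
    for _ in range(20):                              # plain k-means
        d2 = ((X[:, None, :] - mu[None]) ** 2).sum(-1)
        assign = d2.argmin(1)
        for j in range(n_units):
            if (assign == j).any():
                mu[j] = X[assign == j].mean(0)
    d2 = ((X[:, None, :] - mu[None]) ** 2).sum(-1)
    sigma = 2.0 * np.sqrt(d2.min(1)).mean()          # shared width
    resp = np.exp(-d2 / (2 * sigma ** 2))
    resp = resp / resp.sum(1, keepdims=True)
    pi = resp.sum(0)                                 # pi_j from responsibilities
    b = pi * np.exp(-d2 / (2 * sigma ** 2))          # unit activations b_j(x)
    phi = b / b.sum(1, keepdims=True)                # normalized basis outputs
    W = np.zeros((n_units, T.shape[1]))
    for _ in range(n_iters):                         # LMS on the output weights
        err = phi @ W - T
        W = W - lr * phi.T @ err / len(X)
    return mu, sigma, pi, W
```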
All classifiers were able to achieve a reasonable performance level. BP with 60 hidden
units managed to score 95.3% and 93.3% on the training and test sets, respectively. Gauss-G with 28 hidden units scored 94% and 92%. Gauss-RBF scored 97.7% and 91.4% and
required 2000 units to achieve it. (Larger numbers of hidden units led to overfitting.) For
comparison, nearest neighbor achieves a score of 82.4% on the test set.
6.1 PERFORMANCE WITH MISSING FEATURES
We tested the performance of each network in the presence of missing features. For backpropagation we used a numerical approximation to equation (9). For both gaussian basis
function networks we used equation (6). To test the networks we randomly picked samples
from the test set and deleted random features. We calculated a performance score as the
percentage of samples where the correct class was ranked as one of the top two classes.
Figure 3 displays the results. For comparison we also tested each classifier by substituting
the mean value of each missing feature and using the normal update equation.
As predicted by the theory the performance of Gauss-RBF using (6) was consistently better than the others. The fact that BP and Gauss-G performed poorly indicates that the distribution of the features must be taken into account. The fact that using the mean value is
[Figure 3 plot: performance (%) versus number of missing features (0-6) for Gauss-RBF, Gauss-G, BP, and the mean-substitution variants Gauss-G-MEAN, BP-MEAN, RBF-MEAN.]
Figure 3. The performance of various classifiers when dealing with missing features. Each data point denotes an average over 1000 random samples from an independent test set. For each sample, random features were considered missing. Each graph plots the percentage of samples where the correct class was one of the top two classes.
insufficient indicates that the integration step must also be carried out. Perhaps most encouraging is the result that even with 50% of the features missing, Gauss-RBF ranks the correct class among the top two 90% of the time. This clearly shows that a significant amount of information can be extracted even with a large number of missing features.
6.2 PERFORMANCE WITH NOISY FEATURES
We also tested the performance of each network in the presence of noisy features. We randomly picked samples from the test set and added uniform noise to random features. The
noise interval was calculated as $[x_i - 2\sigma_i,\ x_i + 2\sigma_i]$ where $x_i$ is the feature value and $\sigma_i$ is the standard deviation of that feature over the training set. For BP we used equation (9) and for the GBF networks we used equation (7). Figure 4 displays the results. For comparison we also tested each classifier by substituting the noisy value of each noisy feature and using the normal update equation (RBF-N, BP-N, and Gauss-GN). As with missing features, the performance of Gauss-RBF was significantly better than the others when a large
number of features were noisy.
7 DISCUSSION
The results demonstrate the advantages of estimating the input distribution and integrating
over the missing dimensions, at least on this task. They also show that good classification
performance alone does not guarantee good missing feature performance. (Both BP and
Gauss-G performed better than Gauss-RBF on the test set.) To get the best of both worlds
one could use a hybrid technique utilizing separate density estimators and classifiers
although this would probably require equations (1) and (3) to be numerically integrated.
One way to improve the performance of BP and Gauss-G might be to use a training set
that contained missing features. Given the unusual distributions that arise in vision, in
order to guarantee accuracy such a training set should include every possible combination
[Figure 4 plot: performance (%) versus number of noisy features (0-6) for Gauss-RBF, Gauss-G, BP, Gauss-GN, BP-N, RBF-N.]
Figure 4. As in Figure 3 except that the performance with noisy features is plotted.
of missing features. In addition, for each such combination, enough patterns must be
included to accurately estimate the posterior density. In general this type of training is
intractable since the number of combinations is exponential in the number of features.
Note that if the input distribution is available (as in Gauss-RBF), then such a training scenario is unnecessary.
Acknowledgements
We thank D. Goryn, C. Maggioni, S. Omohundro, A. Stokke, and R. Schuster for helpful
discussions, and especially B. Wirtz for providing the computer hand model. V.T. is supported in part by a grant from the Bundesministerium für Forschung und Technologie.
References
A.P. Dempster, N.M. Laird, and D.B. Rubin. (1977) Maximum-likelihood from incomplete data via the EM algorithm. J. Royal Statistical Soc. Ser. B, 39:1-38.
R.O. Duda and P.E. Hart. (1973) Pattern Classification and Scene Analysis. John Wiley & Sons, New York.
J. Moody and C. Darken. (1988) Learning with localized receptive fields. In: D. Touretzky, G. Hinton, T. Sejnowski, editors, Proceedings of the 1988 Connectionist Models Summer School, Morgan Kaufmann, CA.
S. Nowlan. (1990) Maximum Likelihood Competitive Learning. In: Advances in Neural Information Processing Systems 4, pages 574-582.
W.H. Press, B.P. Flannery, S.A. Teukolsky, and W.T. Vetterling. (1986) Numerical Recipes: The Art of Scientific Computing, Cambridge University Press, Cambridge, UK.
M.D. Richard and R.P. Lippmann. (1991) Neural Network Classifiers Estimate Bayesian a posteriori Probabilities, Neural Computation, 3:461-483.
M. Roscheisen, R. Hofman, and V. Tresp. (1992) Neural Control for Rolling Mills: Incorporating Domain Theories to Overcome Data Deficiency. In: Advances in Neural Information Processing Systems 4, pages 659-666.
5,759 | 6,210 | Data driven estimation of Laplace-Beltrami operator
Frédéric Chazal
Inria Saclay
Palaiseau France
[email protected]
Ilaria Giulini
Inria Saclay
Palaiseau France
[email protected]
Bertrand Michel
Ecole Centrale de Nantes
Laboratoire de Mathématiques Jean Leray (UMR 6629 CNRS)
Nantes France
[email protected]
Abstract
Approximations of Laplace-Beltrami operators on manifolds through graph Laplacians have become popular tools in data analysis and machine learning. These
discretized operators usually depend on bandwidth parameters whose tuning remains a theoretical and practical problem. In this paper, we address this problem for
the unnormalized graph Laplacian by establishing an oracle inequality that opens
the door to a well-founded data-driven procedure for the bandwidth selection. Our
approach relies on recent results by Lacour and Massart [LM15] on the so-called
Lepski's method.
1 Introduction
The Laplace-Beltrami operator is a fundamental and widely studied mathematical tool carrying a
lot of intrinsic topological and geometric information about the Riemannian manifold on which it is
defined. Its various discretizations, through graph Laplacians, have inspired many applications in
data analysis and machine learning and led to popular tools such as Laplacian EigenMaps [BN03] for
dimensionality reduction, spectral clustering [VL07], or semi-supervised learning [BN04], just to
name a few.
During the last fifteen years, many efforts, leading to a vast literature, have been made to understand
the convergence of graph Laplacian operators built on top of (random) finite samples to LaplaceBeltrami operators. For example pointwise convergence results have been obtained in [BN05] (see
also [BN08]) and [HAL07], and a (uniform) functional central limit theorem has been established
in [GK06]. Spectral convergence results have also been proved by [BN07] and [VLBB08]. More
recently, [THJ11] analyzed the asymptotic of a large family of graph Laplacian operators by taking
the diffusion process approach previously proposed in [NLCK06].
Graph Laplacians depend on scale or bandwidth parameters whose choice is often left to the user.
Although many convergence results for various metrics have been established, little is known about
how to rigorously and efficiently tune these parameters in practice. In this paper we address this
problem in the case of unnormalized graph Laplacian. More precisely, given a Riemannian manifold
M of known dimension d and a function $f : M \to \mathbb{R}$, we consider the standard unnormalized graph Laplacian operator defined by
$$\hat\Delta_h f(y) = \frac{1}{n h^{d+2}} \sum_i K\!\left( \frac{y - X_i}{h} \right) \left[ f(X_i) - f(y) \right], \qquad y \in M,$$
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
where h is a bandwidth, X1 , . . . , Xn is a finite point cloud sampled on M on which the values of f
can be computed, and K is the Gaussian kernel: for $y \in \mathbb{R}^m$
$$K(y) = \frac{1}{(4\pi)^{d/2}}\, e^{-\|y\|_m^2/4}, \qquad (1)$$
where $\|y\|_m$ is the Euclidean norm in the ambient space $\mathbb{R}^m$.
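As an illustration of this estimator, the following sketch (ours, not the authors' code) evaluates $\hat\Delta_h f$ at a query point by direct transcription of the formula above; the toy point cloud on the unit circle and the choice h = 0.1 are illustrative assumptions.

```python
import numpy as np

def graph_laplacian_at(y, f_y, X, f_X, h, d):
    """Evaluate the unnormalized graph Laplacian (hat-Delta_h f)(y) for a
    point cloud X sampled on a d-dimensional submanifold of R^m, using the
    Gaussian kernel of eq. (1). The intrinsic dimension d is assumed known."""
    n = len(X)
    sq = ((y - X) ** 2).sum(axis=1)                       # ||y - X_i||_m^2
    K = np.exp(-sq / (4 * h ** 2)) / (4 * np.pi) ** (d / 2)
    return (K * (f_X - f_y)).sum() / (n * h ** (d + 2))

# Toy usage: f(x) = x_0 on points of the unit circle in R^2 (so d = 1);
# the printed value is proportional to the Laplacian of f at X[0].
t = np.linspace(0, 2 * np.pi, 2000, endpoint=False)
X = np.stack([np.cos(t), np.sin(t)], axis=1)
print(graph_laplacian_at(X[0], X[0, 0], X, X[:, 0], h=0.1, d=1))
```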
In this case, previous results (see for instance [GK06]) typically say that the bandwidth parameter h in $\hat\Delta_h$ should be taken of the order of $n^{-1/(d+2+\alpha)}$ for some $\alpha > 0$, but in practice, for a given point
cloud, these asymptotic results are not sufficient to choose h efficiently. In the context of neighbor
graphs [THJ11] proposes self-tuning graphs by choosing h locally in terms of the distances to the
k-nearest neighbor, but note that k still need to be chosen and moreover as far as we know there is
no guarantee for such method to be rate-optimal. More recently a data driven method for spectral
clustering has been proposed in [Rie15]. Cross validation [AC+ 10] is the standard approach for
tuning parameters in statistics and machine learning. Nevertheless, the problem of choosing h in $\hat\Delta_h$ is not easy to rewrite as a cross validation problem, in particular because there is no obvious contrast corresponding to the problem (see [AC+10]).
The so-called Lepski's method is another popular method for selecting the smoothing parameter of an estimator. The method has been introduced by Lepski [Lep92b, Lep93, Lep92a] for kernel estimators and local polynomials for various risks and several improvements of the method have then been proposed, see [LMS97, GL09, GL+08]. In this paper we adapt Lepski's method for selecting h in the graph Laplacian estimator $\hat\Delta_h$. Our method is supported by mathematical guarantees: first we obtain an oracle inequality (see Theorem 3.1) and second we obtain the correct rate of convergence (see Theorem 3.3) already proved in the asymptotic studies of [BN05] and [GK06] for non data-driven choices of the bandwidth. Our approach follows the ideas recently proposed in [LM15], but for the specific problem of Laplacian operators on smooth manifolds. In this first work about the data-driven estimation of the Laplace-Beltrami operator, we focus as in [BN05] and [GK06] on the pointwise estimation problem: we consider a smooth function f on M and the aim is to estimate $\Delta_P f$ by a graph Laplacian $\hat\Delta_h f$, for the $L_2$-norm $\|\cdot\|_{2,M}$ on $M \subset \mathbb{R}^m$. The data driven method presented here may be adapted and generalized for other types of risks (uniform norms on functional families and convergence of the spectrum) and other types of graph Laplacian operators; this will be the subject of future works.
The paper is organized as follows: Lepski's method is introduced in Section 2. The main results are
stated in Section 3 and a sketch of their proof is given in Section 4 (the complete proofs are given in
the supplementary material). A numerical illustration and a discussion about the proposed method
are given in Sections 5 and 6 respectively.
2 Lepski's procedure for estimating the Laplace-Beltrami operator
All the Riemannian manifolds considered in the paper are smooth compact d-dimensional submanifolds (without boundary) of $\mathbb{R}^m$ endowed with the Riemannian metric induced by the Euclidean structure of $\mathbb{R}^m$. Recall that, given a compact d-dimensional smooth Riemannian manifold M with volume measure $\mu$, its Laplace-Beltrami operator is the linear operator $\Delta$ defined on the space of smooth functions on M as $\Delta(f) = -\mathrm{div}(\nabla f)$ where $\nabla f$ is the gradient vector field and div the divergence operator. In other words, using Stokes' formula, $\Delta$ is the unique linear operator satisfying
$$\int_M \|\nabla f\|^2\, d\mu = \int_M \Delta(f)\, f\, d\mu.$$
Replacing the volume measure $\mu$ by a distribution P which is absolutely continuous with respect to $\mu$, the weighted Laplace-Beltrami operator $\Delta_P$ is defined as
$$\Delta_P f = \Delta f + \frac{1}{p} \langle \nabla p, \nabla f \rangle, \qquad (2)$$
where p is the density of P with respect to $\mu$. The reader may refer to classical textbooks such as,
e.g., [Ros97] or [Gri09] for a general and detailed introduction to Laplace operators on manifolds.
In the following, we assume that we are given n points $X_1, \dots, X_n$ sampled on M according to the distribution P. Given a smooth function f on M, the aim is to estimate $\Delta_P f$, by selecting an estimator in a given finite family of graph Laplacians $(\hat\Delta_h f)_{h \in H}$, where H is a finite family of bandwidth parameters.
Lepski's procedure is generally presented as a method for selecting bandwidth in an adaptive way. More generally, this method can be seen as an estimator selection procedure.
2.1 Lepski's procedure
We first shortly explain the ideas of Lepski's method. Consider a target quantity s, a collection of estimators $(\hat s_h)_{h \in H}$ and a loss function $\ell(\cdot, \cdot)$. A standard objective when selecting $\hat s_h$ is trying to minimize the risk $E\,\ell(s, \hat s_h)$ among the family of estimators. In most settings, the risk of an estimator can be decomposed into a bias part and a variance part. Of course neither the risk, the bias nor the variance of an estimator are known in practice. However in many cases, the variance term can be controlled quite precisely. Lepski's method requires that the variance of each estimator $\hat s_h$ can be tightly upper bounded by a quantity v(h). In most cases, the bias can be written as $\ell(s, \bar s_h)$ where $\bar s_h$ corresponds to some (deterministic) averaged version of $\hat s_h$. It thus seems natural to estimate $\ell(s, \bar s_h)$ by $\ell(\hat s_{h'}, \hat s_h)$ for some h' smaller than h. The latter quantity incorporates some randomness while the bias does not. The idea is to remove the "random part" of the estimation by considering $[\ell(\hat s_{h'}, \hat s_h) - v(h) - v(h')]_+$, where $[\ ]_+$ denotes the positive part. The bias term is estimated by considering all pairs of estimators $(\hat s_h, \hat s_{h'})$ through the quantity $\sup_{h' \le h} [\ell(\hat s_{h'}, \hat s_h) - v(h) - v(h')]_+$. Finally, the estimator minimizing the sum of the estimated bias and variance is selected, see eq. (3) below.
In our setting, the control of the variance of the graph Laplacian estimators $\hat\Delta_h$ is not tight enough to directly apply the above described method. To overcome this issue, we use a more flexible version of Lepski's method that involves some multiplicative coefficients a and b introduced in the variance and bias terms. More precisely, let $V(h) = V_f(h)$ be an upper bound for $E[\|(E[\hat\Delta_h] - \hat\Delta_h)f\|_{2,M}^2]$. The bandwidth $\hat h$ selected by our Lepski's procedure is defined by
$$\hat h = \hat h_f = \arg\min_{h \in H} \{B(h) + b\,V(h)\} \qquad (3)$$
where
$$B(h) = B_f(h) = \max_{h' \le h,\ h' \in H} \left[ \|(\hat\Delta_{h'} - \hat\Delta_h)f\|_{2,M}^2 - a\,V(h') \right]_+ \qquad (4)$$
with $0 < a \le b$. The calibration of the constants a and b in practice is beyond the scope of this paper, but we suggest a heuristic procedure inspired from [LM15] in Section 5.
but we suggest a heuristic procedure inspired from [LM15] in section 5.
2.2
Variance of the graph Laplacian for smooth functions
In order to control the variance term, we consider for this paper the set F of smooth functions
f : M ? R uniformly bounded up to the third order. For some constant CF > 0 , let
n
o
F = f ? C 3 (M, R) , kf (k) k? ? CF , k = 0, . . . , 3 .
(5)
We introduce some notation before giving the variance term for $f \in \mathcal{F}$. Define
$$D_\alpha = \frac{1}{(4\pi)^d} \int_{\mathbb{R}^d} \left( \frac{C\|u\|_d^{\alpha+2}}{2} + C_1\|u\|_d^\alpha \right) e^{-\|u\|_d^2/4}\, du \qquad (6)$$
$$\widetilde D_\alpha = \frac{1}{(4\pi)^{d/2}} \int_{\mathbb{R}^d} \left( \frac{C\|u\|_d^{\alpha+2}}{4} + C_1\|u\|_d^\alpha \right) e^{-\|u\|_d^2/8}\, du \qquad (7)$$
where C and C1 are geometric constants that only depend on the metric structure of M (see Appendix).
We also introduce the d-dimensional Gaussian kernel on $\mathbb{R}^d$:
$$K_d(u) = \frac{1}{(4\pi)^{d/2}}\, e^{-\|u\|_d^2/4}, \qquad u \in \mathbb{R}^d$$
and we denote by $\|\cdot\|_{p,d}$ the $L_p$-norm on $\mathbb{R}^d$. The next proposition provides an explicit bound V(h)
on the variance term.
Proposition 2.1. Given $h \in H$, for any $f \in \mathcal{F}$, we have
$$V(h) = \frac{C_F^2}{n h^{d+2}} \left( 2\gamma_d \|K_d\|_{2,d}^2 + \omega_d(h) \right),$$
where
$$\omega_d(h) = h^2 \left( 2D_4 + D + 3\gamma_d \|K_d\|_{2,d}^2 \right) + h^4\, \frac{D_6 + 3D}{2} \qquad (8)$$
with $D = \frac{3\mu(M)}{(4\pi)^{d/2}}$ and $\gamma_d = 3 \cdot 2^{d/2-1}$.
3 Results
We now give the main result of the paper: an oracle inequality for the estimator $\hat\Delta_{\hat h}$, or in other words, a bound on the risk that shows that the performance of the estimator is almost as good as it would be if we knew the risks of each estimator. In particular it performs an (almost) optimal trade-off between the variance term V(h) and the approximation term
$$D(h) = D_f(h) = \max\left\{ \|(p\Delta_P - E[\hat\Delta_h])f\|_{2,M},\ \sup_{h' \le h} \|(E[\hat\Delta_{h'}] - E[\hat\Delta_h])f\|_{2,M} \right\} \le 2 \sup_{h' \le h} \|(p\Delta_P - E[\hat\Delta_{h'}])f\|_{2,M}.$$
Theorem 3.1. According to the notation introduced in the previous section, let $\epsilon = \sqrt{a/2} - 1$ and
$$\delta(h) = \sum_{h' \le h} \max\left\{ \exp\left( -\frac{\min\{\epsilon^2, \epsilon\}\, n}{24} \right),\ \exp\left( -c\, \lambda_d(h') \right) \right\}$$
where c > 0 is an absolute constant and
$$\lambda_d(h') = \left[ \frac{1}{\|p\|_\infty\, h'^d} \left( 2\gamma_d \|K_d\|_{2,d}^2 + \omega_d(h') \right) \right] \Big/ \left( 2\gamma_d \|K_d\|_{1,d} + \kappa_d(h) \right)^2$$
with $\omega_d$ defined by (8) and
$$\kappa_d(h) = 2h\gamma_d \|K_d\|_{1,d} + h^2\left( 2\widetilde D_3 + D \right) + h^3\left( \widetilde D_4 + D \right).$$
Given $f \in C^2(M, \mathbb{R})$, with probability at least $1 - 2\sum_{h \in H}\delta(h)$,
$$\|(p\Delta_P - \hat\Delta_{\hat h})f\|_{2,M} \le \inf_{h \in H} \left\{ 3D(h) + (1 + \sqrt{2})\sqrt{b\,V(h)} \right\}. \qquad (9)$$
Broadly speaking, Theorem 3.1 says that there exists an event of large probability for which the estimator selected by Lepski's method is almost as good as the best estimator in the collection. Note that the size of the bandwidth family H has an impact on the probability term $1 - 2\sum_{h \in H}\delta(h)$. If H is not too large, an oracle inequality for the risk of $\hat\Delta_{\hat h} f$ can be easily deduced from the latter result.
Henceforth we assume that $f \in \mathcal{F}$. We first give a control on the approximation term D(h).
Proposition 3.2. Assume that the density p is $C^2$. It holds that
$$D(h) \le \rho\, C_F\, h$$
where $C_F$ is defined in eq. (5) and $\rho > 0$ is a constant depending on $\|p\|_\infty$, $\|p'\|_\infty$, $\|p''\|_\infty$ and on M.
We consider the following grid of bandwidths:
$$H = \left\{ e^{-k},\ \lceil \log\log(n) \rceil \le k \le \lfloor \log(n) \rfloor \right\}.$$
The previous results lead to the pointwise rate of convergence of the graph Laplacian selected by
Lepski's method:
Theorem 3.3. Assume that the density p is $C^2$. For any $f \in \mathcal{F}$, we have
$$E\left[ \|(p\Delta_P - \hat\Delta_{\hat h})f\|_{2,M} \right] \lesssim n^{-\frac{1}{d+4}}. \qquad (10)$$

4 Sketch of the proof of Theorem 3.1
We observe that the following inequality holds
$$\|(p\Delta_P - \hat\Delta_{\hat h})f\|_{2,M} \le D(h) + \|(E[\hat\Delta_h] - \hat\Delta_h)f\|_{2,M} + \sqrt{2\,(B(h) + b\,V(h))}. \qquad (11)$$
Indeed, for $h \in H$,
$$\|(p\Delta_P - \hat\Delta_{\hat h})f\|_{2,M} \le \|(p\Delta_P - E[\hat\Delta_h])f\|_{2,M} + \|(E[\hat\Delta_h] - \hat\Delta_h)f\|_{2,M} + \|(\hat\Delta_h - \hat\Delta_{\hat h})f\|_{2,M}$$
$$\le D(h) + \|(E[\hat\Delta_h] - \hat\Delta_h)f\|_{2,M} + \|(\hat\Delta_h - \hat\Delta_{\hat h})f\|_{2,M}.$$
By definition of B(h), for any h',
$$\|(\hat\Delta_{h'} - \hat\Delta_h)f\|_{2,M}^2 \le B(\max\{h, h'\}) + a\,V(\min\{h, h'\}),$$
so that, according to the definition of $\hat h$ in eq. (3) and recalling that $a \le b$,
$$\|(\hat\Delta_{\hat h} - \hat\Delta_h)f\|_{2,M}^2 \le 2\,[B(h) + a\,V(h)] \le 2\,[B(h) + b\,V(h)]$$
which proves eq. (11).
We are now going to bound the terms that appear in eq. (11). The bound for D(h) is already given in Proposition 3.2, so that in the following we focus on B(h) and $\|(E[\hat\Delta_h] - \hat\Delta_h)f\|_{2,M}$. More precisely the bounds we present in the next two propositions are based on the following lemma from [LM15].
Lemma 4.1. Let $X_1, \dots, X_n$ be an i.i.d. sequence of variables. Let $\widetilde S$ be a countable set of functions and let $\nu(s) = \frac{1}{n}\sum_i [g_s(X_i) - E[g_s(X_i)]]$ for any $s \in \widetilde S$. Assume that there exist constants $\theta$ and $v_g$ such that for any $s \in \widetilde S$
$$\|g_s\|_\infty \le \theta \quad \text{and} \quad \mathrm{Var}[g_s(X)] \le v_g.$$
Denote $H = E[\sup_{s \in \widetilde S} \nu(s)]$. Then for any $\epsilon > 0$ and any $H' \ge H$
$$P\left[ \sup_{s \in \widetilde S} \nu(s) \ge (1+\epsilon)H' \right] \le \max\left\{ \exp\left( -\frac{\epsilon^2 n H'^2}{6 v_g} \right),\ \exp\left( -\frac{\min\{\epsilon, 1\}\, \epsilon\, n H'}{24\, \theta} \right) \right\}.$$
Proposition 4.2. Let $\epsilon = \sqrt{a/2} - 1$. Given $h \in H$, define
$$\delta_1(h) = \sum_{h' \le h} \max\left\{ \exp\left( -\frac{\min\{\epsilon^2, \epsilon\}\, n}{24} \right),\ \exp\left( -\frac{2\epsilon^2}{3}\, \lambda_d(h') \right) \right\}.$$
With probability at least $1 - \delta_1(h)$,
$$B(h) \le 2\,D(h)^2.$$
Proposition 4.3. Let $\eta = \sqrt{a} - 1$. Given $h \in H$ define
$$\delta_2(h) = \max\left\{ \exp\left( -\frac{\min\{\eta^2, \eta\}\, n}{24} \right),\ \exp\left( -\frac{\eta}{24}\, \lambda_d(h) \right) \right\}.$$
With probability at least $1 - \delta_2(h)$,
$$\|(E[\hat\Delta_h] - \hat\Delta_h)f\|_{2,M} \le \sqrt{a\,V(h)}.$$
Combining the above propositions with eq. (11), we get that, for any $h \in H$, with probability at least $1 - (\delta_1(h) + \delta_2(h))$,
$$\|(p\Delta_P - \hat\Delta_{\hat h})f\|_{2,M} \le D(h) + \sqrt{a\,V(h)} + \sqrt{4D(h)^2 + 2b\,V(h)} \le 3D(h) + (1+\sqrt{2})\sqrt{b\,V(h)}$$
where we have used the fact that $a \le b$. Taking a union bound on $h \in H$ we conclude the proof.
5 Numerical illustration
In this section we illustrate the results of the previous section on a simple example. In Section 5.1, we describe a practical procedure when the data set X is sampled according to the uniform measure on M. A numerical illustration is given in Section 5.2 when M is the unit 2-dimensional sphere in $\mathbb{R}^3$.
5.1 Practical application of Lepski's method
Lepski's method presented in Section 2 cannot be directly applied in practice for two reasons. First, we cannot compute the $L_2$-norm $\|\cdot\|_{2,M}$ on M, the manifold M being unknown. Second, the variance terms involved in Lepski's method are not completely explicit.
Regarding the first issue, we can approximate $\|\cdot\|_{2,M}$ by splitting the data into two samples: an estimation sample $X_1$ for computing the estimators and a validation sample $X_2$ for evaluating this norm. More precisely, given two estimators $\hat\Delta_h f$ and $\hat\Delta_{h'} f$ computed using $X_1$, the quantity $\|(\hat\Delta_h - \hat\Delta_{h'})f\|_{2,M}^2 / \mu(M)$ is approximated by the averaged sum $\frac{1}{n_2} \sum_{x \in X_2} |\hat\Delta_h f(x) - \hat\Delta_{h'} f(x)|^2$, where $n_2$ is the number of points in $X_2$. We use these approximations to evaluate the bias terms B(h) defined by (4).
The second issue comes from the fact that the variance terms involved in Lepski's method depend on the metric properties of the manifold and on the sampling density, which are both unknown. These variance terms are thus only known up to a multiplicative constant. This situation contrasts with more standard frameworks for which a tight and explicit control on the variance terms can be proposed, as in [Lep92b, Lep93, Lep92a]. To address this second issue, we follow the calibration strategy recently proposed in [LM15] (see also [LMR16]). In practice we remove all the multiplicative constants from V(h): all these constants are passed into the terms a and b. This means that we rewrite Lepski's method as follows:
$$\hat h(a, b) = \arg\min_{h \in H} \left\{ B(h) + b\, \frac{1}{n h^4} \right\}$$
where
$$B(h) = \max_{h' \le h,\ h' \in H} \left[ \|(\hat\Delta_{h'} - \hat\Delta_h)f\|_{2,M}^2 - a\, \frac{1}{n h'^4} \right]_+ .$$
We choose a and b according to the following heuristic:
1. Take b = a and consider the sequence of selected models $\hat h(a, a)$;
2. Starting from large values of a, make a decrease and find the location $a_0$ of the main bandwidth jump in the step function $a \mapsto \hat h(a, a)$;
3. Select the model $\hat h(a_0, 2a_0)$ (a code sketch of this heuristic is given below).
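A minimal sketch of steps 1-3, our own illustration; the dictionary-based interface and the jump-detection rule below are assumptions, not the authors' implementation.

```python
import numpy as np

def calibrate_jump(selected):
    """Steps 1-3 above: `selected` maps each trial value a to the bandwidth
    hat-h(a, a) chosen with b = a. We scan a from large to small, locate the
    largest jump of the step function, and return the corresponding a0. The
    final model is then hat-h(a0, 2*a0), e.g. rerun the selection rule
    sketched earlier with a = a0 and b = 2*a0."""
    a_vals = np.sort(np.array(list(selected.keys())))[::-1]   # large to small
    h_vals = np.array([selected[a] for a in a_vals])
    jumps = np.abs(np.diff(h_vals))
    a0 = a_vals[jumps.argmax() + 1]   # value of a just past the main jump
    return a0
```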
The justification of this calibration method is currently the subject of mathematical studies ([LM15]).
Note that a similar strategy called "slope heuristic" has been proposed for calibrating $\ell_0$ penalties in various settings and is supported by strong mathematical results, see for instance [BM07, AM09, BMM12].
5.2 Illustration on the sphere
In this section we illustrate the complete method on a simple example with data points generated
uniformly on the sphere $S^2$ in $\mathbb{R}^3$. In this case, the weighted Laplace-Beltrami operator is equal to
the (non weighted) Laplace-Beltrami operator on the sphere.
We consider the function $f(x, y, z) = (x^2 + y^2 + z) \sin x \cos x$. The restriction of this function on the sphere has the following representation in spherical coordinates:
$$\tilde f(\theta, \phi) = (\sin^2\theta + \cos\theta)\, \sin(\sin\theta \cos\phi)\, \cos(\sin\theta \cos\phi).$$
It is well known that the Laplace-Beltrami operator on the sphere satisfies (see Section 3 in [Gri09]):
$$\Delta_{S^2} u = \frac{1}{\sin\theta}\, \frac{\partial}{\partial\theta}\!\left( \sin\theta\, \frac{\partial u}{\partial\theta} \right) + \frac{1}{\sin^2\theta}\, \frac{\partial^2 u}{\partial\phi^2}$$
for any smooth polar function u. This allows us to derive an analytic expression of $\Delta_{S^2} \tilde f$.
We sample $n_1 = 10^6$ points on the sphere for computing the graph Laplacians and we use $n_2 = 10^3$ points for approximating the norms $\|(\hat\Delta_h - \hat\Delta_{h'})\tilde f\|_{2,M}$. We compute the graph Laplacians for
bandwidths in a grid H between 0.001 and 0.8 (see fig. 1). The risk of each graph Laplacian is
estimated by a standard Monte Carlo procedure (see fig. 2).
Figure 1: Choosing h is crucial for estimating $\Delta_{S^2}\tilde f$: small bandwidth overfits $\Delta_{S^2}\tilde f$ whereas large bandwidth leads to almost constant approximation functions of $\Delta_{S^2}\tilde f$.
Figure 2: Estimation of the risk of each graph Laplacian operator: the oracle Laplacian is at approximately h = 0.15.
Figure 3 illustrates the calibration method. On this picture, the x-axis corresponds to the values of a and the y-axis represents the bandwidths. The blue step function represents the function $a \mapsto \hat h(a, a)$. The red step function gives the model selected by the rule $a \mapsto \hat h(a, 2a)$. Following the heuristics given in Section 5.1, one could take for this example the value $a_0 \approx 3.5$ (location of the bandwidth jump for the blue curve) which leads to select the model $\hat h(a_0, 2a_0) \approx 0.2$ (red curve).
6 Discussion
This paper is a first attempt at a complete and well-founded data driven method for inferring Laplace-Beltrami operators from data points. Our results suggest various extensions and raise some questions of interest. For instance, other versions of the graph Laplacian have been studied in the literature (see
Figure 3: Bandwidth jump heuristic: find the location of the jump (blue curve) and deduce the
selected bandwidth with the red curve.
for instance [HAL07, BN08]), in particular when data is not sampled uniformly. It would be relevant
to propose a bandwidth selection method for these alternative estimators also.
From a practical point of view, as explained in section 5, there is a gap between the theory we
obtain in the paper and what can be done in practice. To fill this gap, a first objective is to prove an
oracle inequality in the spirit of Theorem 3.1 for a bias term defined in terms of the empirical norms
computed in practice. A second objective is to propose mathematically well-founded heuristics for
the calibration of the parameters a and b.
Tuning bandwidths for the estimation of the spectrum of the Laplace-Beltrami operator is a difficult
but important problem in data analysis. We are currently working on the adaptation of our results to
the case of operator norms and spectrum estimation.
Appendix: the geometric constants C and C1
The following classical lemma (see, e.g. [GK06][Prop. 2.2 and Eq. 3.20]) relates the constants C
and C1 introduced in Equations (6) and (7) to the geometric structure of M .
Lemma 6.1. There exist constants $C, C_1 > 0$ and a positive real number $r > 0$ such that for any $x \in M$, and any $v \in T_x M$ such that $\|v\| \le r$,
$$\sqrt{\det(g_{ij})(v)} \ge 1 - C_1 \|v\|_d^2 \quad \text{and} \quad \frac{1}{2}\|v\|_d^2 \le \|v\|_d^2 - C\|v\|_d^4 \le \|E_x(v) - x\|_m^2 \le \|v\|_d^2 \qquad (12)$$
where $E_x : T_x M \to M$ is the exponential map and $(g_{i,j})_{i,j \in \{1, \dots, d\}}$ are the components of the metric tensor in any normal coordinate system around x.
Although the proof of the lemma is beyond the scope of this paper, notice that one can indeed give
explicit bounds on r and C in terms of the reach and injectivity radius of the submanifold M .
Acknowledgments
The authors are grateful to Pascal Massart for helpful discussions on Lepski's method. This work
was supported by the ANR project TopData ANR-13-BS01-0008 and ERC Gudhi No. 339025
References
[AC+10] Sylvain Arlot, Alain Celisse, et al. A survey of cross-validation procedures for model selection. Statistics Surveys, 4:40-79, 2010.
[AM09] Sylvain Arlot and Pascal Massart. Data-driven calibration of penalties for least-squares regression. The Journal of Machine Learning Research, 10:245-279, 2009.
[BM07] Lucien Birgé and Pascal Massart. Minimal penalties for Gaussian model selection. Probability Theory and Related Fields, 138(1-2):33-73, 2007.
[BMM12] Jean-Patrick Baudry, Cathy Maugis, and Bertrand Michel. Slope heuristics: overview and implementation. Statistics and Computing, 22(2):455-470, 2012.
[BN03] Mikhail Belkin and Partha Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15(6):1373-1396, 2003.
[BN04] Mikhail Belkin and Partha Niyogi. Semi-supervised learning on Riemannian manifolds. Machine Learning, 56(1-3):209-239, 2004.
[BN05] Mikhail Belkin and Partha Niyogi. Towards a theoretical foundation for Laplacian-based manifold methods. In Learning Theory, pages 486-500. Springer, 2005.
[BN07] Mikhail Belkin and Partha Niyogi. Convergence of Laplacian eigenmaps. Advances in Neural Information Processing Systems, 19:129, 2007.
[BN08] Mikhail Belkin and Partha Niyogi. Towards a theoretical foundation for Laplacian-based manifold methods. Journal of Computer and System Sciences, 74(8):1289-1308, 2008.
[GK06] E. Giné and V. Koltchinskii. Empirical graph Laplacian approximation of Laplace-Beltrami operators: Large sample results. In High Dimensional Probability, pages 238-259. IMS, 2006.
[GL+08] Alexander Goldenshluger, Oleg Lepski, et al. Universal pointwise selection rule in multivariate function estimation. Bernoulli, 14(4):1150-1190, 2008.
[GL09] A. Goldenshluger and O. Lepski. Structural adaptation via Lp-norm oracle inequalities. Probability Theory and Related Fields, 143(1-2):41-71, 2009.
[Gri09] Alexander Grigoryan. Heat Kernel and Analysis on Manifolds, volume 47. American Mathematical Soc., 2009.
[HAL07] M. Hein, J.-Y. Audibert, and U. von Luxburg. Graph Laplacians and their convergence on random neighborhood graphs. Journal of Machine Learning Research, 8(6), 2007.
[Lep92a] Oleg V. Lepski. On problems of adaptive estimation in white Gaussian noise. Topics in Nonparametric Estimation, 12:87-106, 1992.
[Lep92b] O.V. Lepskii. Asymptotically minimax adaptive estimation. I: Upper bounds. Optimally adaptive estimates. Theory of Probability & Its Applications, 36(4):682-697, 1992.
[Lep93] O.V. Lepskii. Asymptotically minimax adaptive estimation. II. Schemes without optimal adaptation: Adaptive estimators. Theory of Probability & Its Applications, 37(3):433-448, 1993.
[LM15] Claire Lacour and Pascal Massart. Minimal penalty for Goldenshluger-Lepski method. arXiv preprint arXiv:1503.00946, 2015.
[LMR16] Claire Lacour, Pascal Massart, and Vincent Rivoirard. Estimator selection: a new method with applications to kernel density estimation. arXiv preprint arXiv:1607.05091, 2016.
[LMS97] Oleg V. Lepski, Enno Mammen, and Vladimir G. Spokoiny. Optimal spatial adaptation to inhomogeneous smoothness: an approach based on kernel estimates with variable bandwidth selectors. The Annals of Statistics, pages 929-947, 1997.
[NLCK06] B. Nadler, S. Lafon, R.R. Coifman, and I.G. Kevrekidis. Diffusion maps, spectral clustering and reaction coordinates of dynamical systems. Applied and Computational Harmonic Analysis, 21(1):113-127, 2006.
[Rie15] Antonio Rieser. A topological approach to spectral clustering. arXiv:1506.02633, 2015.
[Ros97] Steven Rosenberg. The Laplacian on a Riemannian Manifold: An Introduction to Analysis on Manifolds. Number 31. Cambridge University Press, 1997.
[THJ11] Daniel Ting, Ling Huang, and Michael Jordan. An analysis of the convergence of graph Laplacians. arXiv preprint arXiv:1101.5435, 2011.
[VL07] Ulrike von Luxburg. A tutorial on spectral clustering. Statistics and Computing, 17(4):395-416, 2007.
[VLBB08] Ulrike von Luxburg, Mikhail Belkin, and Olivier Bousquet. Consistency of spectral clustering. The Annals of Statistics, pages 555-586, 2008.
5,760 | 6,211 | Supervised Learning with Tensor Networks
E. M. Stoudenmire
Perimeter Institute for Theoretical Physics
Waterloo, Ontario, N2L 2Y5, Canada
David J. Schwab
Department of Physics
Northwestern University, Evanston, IL
Abstract
Tensor networks are approximations of high-order tensors which are efficient to
work with and have been very successful for physics and mathematics applications.
We demonstrate how algorithms for optimizing tensor networks can be adapted to
supervised learning tasks by using matrix product states (tensor trains) to parameterize non-linear kernel learning models. For the MNIST data set we obtain less
than 1% test set classification error. We discuss an interpretation of the additional
structure imparted by the tensor network to the learned model.
1
Introduction
Recently there has been growing appreciation for tensor methods in machine learning. Tensor
decompositions can solve non-convex optimization problems [1, 2] and be used for other important
tasks such as extracting features from input data and parameterizing neural nets [3, 4, 5]. Tensor
methods have also become prominent in the field of physics, especially the use of tensor networks
which accurately capture very high-order tensors while avoiding the curse of dimensionality
through a particular geometry of low-order contracted tensors [6]. The most successful use of
tensor networks in physics has been to approximate exponentially large vectors arising in quantum
mechanics [7, 8].
Another context where very large vectors arise is non-linear kernel learning, where input vectors x
are mapped into a higher dimensional space via a feature map $\Phi(\mathbf{x})$ before being classified by a decision function
$$f(\mathbf{x}) = W \cdot \Phi(\mathbf{x})\,. \qquad (1)$$
The feature vector $\Phi(\mathbf{x})$ and weight vector W can be exponentially large or even infinite. One
approach to deal with such large vectors is the well-known kernel trick, which only requires working
with scalar products of feature vectors [9].
In what follows we propose a rather different approach. For certain learning tasks and a specific class
of feature map $\Phi$, we find the optimal weight vector W can be approximated as a tensor network: a
contracted sequence of low-order tensors. Representing W as a tensor network and optimizing it
directly (without passing to the dual representation) has many interesting consequences. Training the
model scales only linearly in the training set size; the evaluation cost for a test input is independent
of training set size. Tensor networks are also adaptive: dimensions of tensor indices internal to the
network grow and shrink during training to concentrate resources on the particular correlations within
the data most useful for learning. The tensor network form of W presents opportunities to extract
information hidden within the trained model and to accelerate training by optimizing different internal
tensors in parallel [10]. Finally, the tensor network form is an additional type of regularization beyond
the choice of feature map, and could have interesting consequences for generalization.
One of the best understood types of tensor networks is the matrix product state (MPS) [11, 8], also
known as the tensor train decomposition [12]. Though MPS are best at capturing one-dimensional
correlations, they are powerful enough to be applied to distributions with higher-dimensional correlations as well. MPS have been very useful for studying quantum systems, and have recently
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Figure 1: The matrix product state (MPS) decomposition, also known as a tensor train. (Lines
represent tensor indices and connecting two lines implies summation.)
been investigated for machine learning applications such as learning features by decomposing tensor
representations of data [4] and compressing the weight layers of neural networks [5].
While applications of MPS to machine learning have been a success, one aim of the present work is
to have tensor networks play a more central role in developing learning models; another is to more
easily incorporate powerful algorithms and tensor networks which generalize MPS developed by the
physics community for studying higher dimensional and critical systems [13, 14, 15]. But in what
follows, we only consider the case of MPS tensor networks as a proof of principle.
The MPS decomposition is an approximation of an order-N tensor by a contracted chain of N lowerorder tensors shown in Fig. 1. (Throughout we will use tensor diagram notation: shapes represent
tensors and lines emanating from them are tensor indices; connecting two lines implies contraction of
a pair of indices. We emphasize that tensor diagrams are not merely schematic, but have a rigorous
algorithmic interpretation. For a helpful review of this notation, see Cichocki [16].)
Representing the weights W of Eq. (1) as an MPS allows one to efficiently optimize these weights and
adaptively change their number by varying W locally a few tensors at a time, in close analogy to the
density matrix renormalization group (DMRG) algorithm used in physics [17, 8]. Similar alternating
least squares methods for tensor trains have been explored more recently in applied mathematics [18].
This paper is organized as follows: first we propose our general approach and describe an algorithm
for optimizing the weight vector W in MPS form. Then we test our approach on the MNIST
handwritten digit set and find very good performance for remarkably small MPS bond dimensions.
Finally, we discuss the structure of the functions realized by our proposed models.
For researchers interested in reproducing our results, we have made our codes publicly available at:
https://github.com/emstoudenmire/TNML. The codes are based on the ITensor library [19].
2
Encoding Input Data
Tensor networks in physics are typically used in a context where combining N independent systems
corresponds to taking a tensor product of a vector describing each system. With the goal of applying
similar tensor networks to machine learning, we choose a feature map of the form
$$\Phi^{s_1 s_2 \cdots s_N}(\mathbf{x}) = \phi^{s_1}(x_1) \otimes \phi^{s_2}(x_2) \otimes \cdots \otimes \phi^{s_N}(x_N)\,. \qquad (2)$$
The tensor $\Phi^{s_1 s_2 \cdots s_N}$ is the tensor product of a local feature map $\phi^{s_j}(x_j)$ applied to each input
component xj of the N -dimensional vector x (where j = 1, 2, . . . , N ). The indices sj run from 1
to d, where d is known as the local dimension and is a hyper-parameter defining the classification
model. Though one could use a different local feature map for each input component xj , we will
only consider the case of homogeneous inputs with the same local map applied to each xj . Thus each
$x_j$ is mapped to a d-dimensional vector, and the full feature map $\Phi(\mathbf{x})$ can be viewed as a vector in a $d^N$-dimensional space or as an order-N tensor. The tensor diagram for $\Phi(\mathbf{x})$ is shown in Fig. 2. This
type of tensor is said be rank-1 since it is manifestly the product of N order-1 tensors.
For a concrete example of this type of feature map, which we will use later, consider inputs which are
grayscale images with N pixels, where each pixel value ranges from 0.0 for white to 1.0 for black. If
the grayscale value of pixel number j is $x_j \in [0, 1]$, a simple choice for the local map $\phi^{s_j}(x_j)$ is
$$\phi^{s_j}(x_j) = \left[ \cos\!\left( \frac{\pi}{2} x_j \right),\ \sin\!\left( \frac{\pi}{2} x_j \right) \right] \qquad (3)$$
and is illustrated in Fig. 3. The full image is represented as a tensor product of these local vectors. The above feature map is somewhat ad-hoc, and is motivated by "spin" vectors encountered in quantum
systems. More research is needed to understand the best choices for $\phi^{s}(x)$, but the most crucial property seems to be that $\Phi(\mathbf{x}) \cdot \Phi(\mathbf{x}')$ is a smooth and slowly varying function of $\mathbf{x}$ and $\mathbf{x}'$, and induces a distance metric in feature space that tends to cluster similar images together.
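For concreteness, the local map of eq. (3) and the rank-1 feature tensor of eq. (2) can be written in a few lines. This is a minimal NumPy sketch of ours (the paper's own code is ITensor-based); it keeps $\Phi(\mathbf{x})$ as its N rank-1 factors rather than forming the full $2^N$-component tensor.

```python
import numpy as np

def local_feature_map(x):
    """Eq. (3): map a grayscale pixel value x in [0, 1] to a normalized
    two-component vector."""
    return np.array([np.cos(np.pi * x / 2), np.sin(np.pi * x / 2)])

def full_feature_map(pixels):
    """Eq. (2): Phi(x) is the tensor product of the local vectors; we return
    the list of N rank-1 factors instead of the exponentially large tensor."""
    return [local_feature_map(x) for x in pixels]

phi = full_feature_map(np.array([0.0, 0.25, 1.0]))   # three-pixel toy "image"
```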
[Figure 2 diagram: the order-6 tensor $\Phi^{s_1 s_2 s_3 s_4 s_5 s_6}$ equals the tensor product of six local vectors $\phi^{s_1} \otimes \cdots \otimes \phi^{s_6}$.]
Figure 2: Input data is mapped to a normalized order N tensor with a rank-1 product structure.
Figure 3: For the case of a grayscale image and d = 2, each pixel value is mapped to a normalized
two-component vector. The full image is mapped to the tensor product of all the local pixel vectors
as shown in Fig. 2.
The feature map Eq. (2) defines a kernel which is the product of N local kernels, one for each
component xj of the input data. Kernels of this type have been discussed previously in Vapnik [20, p.
193] and have been argued by Waegeman et al. [21] to be useful for data where no relationship is
assumed between different components of the input vector prior to learning.
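As a quick check of this product structure, note that with the local map Eq. (3) each local kernel reduces, by the angle-difference identity, to cos(π(x_j − x′_j)/2). A minimal sketch (our own helper, not part of the released code):

    import numpy as np

    def product_kernel(x, xp):
        # Kernel induced by the feature map Eq. (2): a product of N local
        # kernels, K(x, x') = prod_j  phi(x_j) . phi(x'_j).
        local = (np.cos(np.pi / 2 * x) * np.cos(np.pi / 2 * xp)
                 + np.sin(np.pi / 2 * x) * np.sin(np.pi / 2 * xp))
        return float(np.prod(local))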
3 Classification Model
In what follows we are interested in classifying data with pre-assigned hidden labels, for which we
choose a "one-versus-all" strategy, which we take to mean optimizing a set of functions indexed by a
label ℓ

    f^ℓ(x) = W^ℓ · Φ(x)    (4)

and classifying an input x by choosing the label ℓ for which |f^ℓ(x)| is largest.
Since we apply the same feature map Φ to all input data, the only quantity that depends on the label
ℓ is the weight vector W^ℓ. Though one can view W^ℓ as a collection of vectors labeled by ℓ, we
will prefer to view W^ℓ as an order N + 1 tensor, where ℓ is a tensor index and f^ℓ(x) is a function
mapping inputs to the space of labels. The tensor diagram for evaluating f^ℓ(x) for a particular input
is depicted in Fig. 4.
Because the weight tensor W^ℓ_{s_1 s_2 ⋯ s_N} has N_L · d^N components, where N_L is the number of labels,
we need a way to regularize and optimize this tensor efficiently. The strategy we will use is to
represent W^ℓ as a tensor network, namely as an MPS, which has the key advantage that methods for
manipulating and optimizing it are well understood and highly efficient. An MPS decomposition
of the weight tensor W^ℓ has the form

    W^ℓ_{s_1 s_2 ⋯ s_N} = Σ_{{α}} A^{α_1}_{s_1} A^{α_1 α_2}_{s_2} ⋯ A^{ℓ; α_j α_{j+1}}_{s_j} ⋯ A^{α_{N−1}}_{s_N}    (5)
Figure 4: The overlap of the weight tensor W^ℓ with a specific input vector Φ(x) defines the decision
function f^ℓ(x). The label ℓ for which f^ℓ(x) has maximum magnitude is the predicted label for x.
Figure 5: Approximation of the weight tensor W^ℓ by a matrix product state. The label index ℓ is
placed arbitrarily on one of the N tensors but can be moved to other locations.
and is illustrated in Fig. 5. Each A tensor has d·m² elements, which are the latent variables parameterizing the approximation of W; the A tensors are in general not unique and can be constrained to
bestow nice properties on the MPS, like making the A tensors partial isometries.
The dimensions of each internal index α_j of an MPS are known as the bond dimensions and are the
(hyper) parameters controlling complexity of the MPS approximation. For sufficiently large bond
dimensions an MPS can represent any tensor [22]. The name matrix product state refers to the fact
that any specific component of the full tensor W^ℓ_{s_1 s_2 ⋯ s_N} can be recovered efficiently by summing
over the {α_j} indices from left to right via a sequence of matrix products (the term "state" refers to
the original use of MPS to describe quantum states of matter).
In the above decomposition Eq. (5), the label index ℓ was arbitrarily placed on the tensor at some
position j, but this index can be moved to any other tensor of the MPS without changing the overall
W^ℓ tensor it represents. To do so, one contracts the tensor at position j with one of its neighbors,
then decomposes this larger tensor using a singular value decomposition such that ℓ now belongs to
the neighboring tensor; see Fig. 7(a).
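The contraction of Fig. 4 can be sketched in a few lines of NumPy. The sketch below assumes each MPS tensor is stored as a (left bond, physical, right bond) array with dummy boundary bonds of dimension 1, and a (left bond, physical, label, right bond) array at the label site; these storage conventions are ours, not fixed by the paper (the released codes use the ITensor library):

    import numpy as np

    def evaluate(mps, phi, label_site):
        # Contract W^l with the product-state input Phi(x) (Eq. (4), Fig. 4)
        # by sweeping left to right; the cost is linear in N.
        left = np.ones(1)
        out = None
        for j, (A, v) in enumerate(zip(mps, phi)):
            if j == label_site:
                M = np.einsum('ldcr,d->lcr', A, v)   # keep the label index c
                out = np.einsum('l,lcr->cr', left, M)
            elif out is None:
                left = left @ np.einsum('ldr,d->lr', A, v)
            else:
                out = out @ np.einsum('ldr,d->lr', A, v)
        return out[:, 0]                             # f^l(x), one entry per label

    def predict(mps, phi, label_site):
        return np.argmax(np.abs(evaluate(mps, phi, label_site)))

    # Tiny random example: N = 4 sites, d = 2, bond dimension m = 3, 10 labels.
    rng = np.random.default_rng(0)
    mps = [rng.normal(size=(1, 2, 3)), rng.normal(size=(3, 2, 10, 3)),
           rng.normal(size=(3, 2, 3)), rng.normal(size=(3, 2, 1))]
    phi = [np.array([np.cos(np.pi / 2 * x), np.sin(np.pi / 2 * x)])
           for x in rng.random(4)]
    print(predict(mps, phi, label_site=1))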
4 "Sweeping" Optimization Algorithm
Inspired by the very successful DMRG algorithm developed for physics applications [17, 8], here we
propose a similar algorithm which "sweeps" back and forth along an MPS, iteratively minimizing the
cost function defining the classification task.
To describe the algorithm in concrete terms, we wish to optimize the quadratic cost
C = (1/2) Σ_{n=1}^{N_T} Σ_ℓ (f^ℓ(x_n) − y^ℓ_n)², where n runs over the N_T training inputs and y_n is the vector
of desired outputs for input n. If the correct label of x_n is L_n, then y^{L_n}_n = 1 and y^ℓ_n = 0 for all other
labels ℓ (i.e. a one-hot encoding).
Our strategy for minimizing this cost function will be to vary only two neighboring MPS tensors at a
time within the approximation Eq. (5). We could conceivably just vary one at a time, but varying two
tensors makes it simple to adaptively change the MPS bond dimension.
Say we want to improve the tensors at sites j and j + 1. Assume we have moved the label index ℓ
to the MPS tensor at site j. First we combine the MPS tensors A^ℓ_{s_j} and A_{s_{j+1}} into a single "bond
tensor" B^{α_{j−1} ℓ α_{j+1}}_{s_j s_{j+1}} by contracting over the index α_j as shown in Fig. 6(a).
Next we compute the derivative of the cost function C with respect to the bond tensor B^ℓ in order to
update it using a gradient descent step. Because the rest of the MPS tensors are kept fixed, let us show
that to compute the gradient it suffices to feed, or project, each input x_n through the fixed "wings" of
the MPS as shown on the left-hand side of Fig. 6(b) (connected lines in the diagram indicate sums
over pairs of indices). The result is a projected, four-index version of the input Φ̃_n shown on the
right-hand side of Fig. 6(b). The current decision function can be efficiently computed from this
projected input Φ̃_n and the current bond tensor B^ℓ as

    f^ℓ(x_n) = Σ_{α_{j−1} α_{j+1}} Σ_{s_j s_{j+1}} B^{α_{j−1} ℓ α_{j+1}}_{s_j s_{j+1}} (Φ̃_n)^{s_j s_{j+1}}_{α_{j−1} α_{j+1}}    (6)
or as illustrated in Fig. 6(c). The gradient update to the tensor B^ℓ can be computed as

    ΔB^ℓ = −∂C/∂B^ℓ = Σ_{n=1}^{N_T} (y^ℓ_n − f^ℓ(x_n)) Φ̃_n .    (7)
Figure 6: Steps leading to computing the gradient of the bond tensor B^ℓ at bond j: (a) forming
the bond tensor; (b) projecting a training input into the "MPS basis" at bond j; (c) computing the
decision function in terms of a projected input; (d) the gradient correction to B^ℓ. The dark shaded
circular tensors in step (b) are "effective features" formed from m different linear combinations of
many original features.
The tensor diagram for ΔB^ℓ is shown in Fig. 6(d).
Having computed the gradient, we use it to make a small update to B^ℓ, replacing it with B^ℓ + α ΔB^ℓ
for some small step size α. Having obtained our improved B^ℓ, we must decompose it back into separate
MPS tensors to maintain efficiency and apply our algorithm to the next bond. Assume the next
bond we want to optimize is the one to the right (bond j + 1). Then we can compute a singular
value decomposition (SVD) of B^ℓ, treating it as a matrix with a collective row index (α_{j−1}, s_j) and
collective column index (ℓ, α_{j+1}, s_{j+1}) as shown in Fig. 7(a). Computing the SVD this way restores
the MPS form, but with the ℓ index moved to the tensor on site j + 1. If the SVD of B^ℓ is given by
    B^{α_{j−1} ℓ α_{j+1}}_{s_j s_{j+1}} = Σ_{α′_j α_j} U^{α_{j−1} α′_j}_{s_j} S_{α′_j α_j} V^{α_j ℓ α_{j+1}}_{s_{j+1}} ,    (8)

then to proceed to the next step we define the new MPS tensor at site j to be A′_{s_j} = U_{s_j} and the new
tensor at site j + 1 to be A′^ℓ_{s_{j+1}} = S V^ℓ_{s_{j+1}}, where a matrix multiplication over the suppressed α indices
is implied. Crucially at this point, only the m largest singular values in S are kept and the rest are
truncated (along with the corresponding columns of U and V†) in order to control the computational
cost of the algorithm. Such a truncation is guaranteed to produce an optimal approximation of the
tensor B^ℓ (it minimizes the norm of the difference before and after truncation); furthermore, if all of
the MPS tensors to the left and right of B^ℓ are formed from (possibly truncated) unitary matrices
similar to the definition of A′_{s_j} above, then the optimality of the truncation of B^ℓ applies globally
to the entire MPS as well. For further background reading on these technical aspects of MPS, see
Refs. [8] and [16].
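A NumPy sketch of the two numerical steps at a single bond, the gradient correction of Eq. (7) and the truncated SVD of Eq. (8), is given below. The index ordering and the singular-value cutoff are our own assumptions; the actual experiments use the ITensor-based codes linked above.

    import numpy as np

    def bond_step(B, phis, ys, lr=0.01, max_m=120):
        # B    : bond tensor, shape (ml, d, L, d, mr), indices ordered as
        #        (alpha_{j-1}, s_j, l, s_{j+1}, alpha_{j+1}).
        # phis : projected inputs of Fig. 6(b), shape (NT, ml, d, d, mr).
        # ys   : one-hot label vectors, shape (NT, L).
        f = np.einsum('asltb,nastb->nl', B, phis)                # Eq. (6)
        B = B + lr * np.einsum('nl,nastb->asltb', ys - f, phis)  # Eq. (7)
        # Restore MPS form, Eq. (8) / Fig. 7(a): rows are (alpha_{j-1}, s_j),
        # columns are (l, s_{j+1}, alpha_{j+1}); keep only the largest
        # singular values.
        ml, d, L, _, mr = B.shape
        U, S, Vt = np.linalg.svd(B.reshape(ml * d, L * d * mr),
                                 full_matrices=False)
        k = max(1, min(max_m, int((S > 1e-10).sum())))  # adaptive bond dim
        A_j = U[:, :k].reshape(ml, d, k)                # left-orthogonal site j
        A_j1 = (np.diag(S[:k]) @ Vt[:k]).reshape(k, L, d, mr)
        return A_j, A_j1.transpose(0, 2, 1, 3)          # label now at site j+1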
Finally, when proceeding to the next bond, it would be inefficient to fully project each training input
over again into the configuration in Fig. 6(b). Instead it is only necessary to advance the projection
by one site using the MPS tensor set from a unitary matrix after the SVD as shown in Fig. 7(b). This
allows the cost of each local step of the algorithm to remain independent of the size of the input space,
making the total algorithm scale only linearly with input space size (i.e. the number of components
of an input vector x).
The above algorithm highlights a key advantage of MPS and tensor networks relevant to machine
learning applications. Following the SVD of the improved bond tensor B′^ℓ, the dimension of the new
MPS bond can be chosen adaptively based on the number of large singular values encountered in
the SVD (defined by a threshold chosen in advance). Thus the MPS form of W^ℓ can be compressed
as much as possible, and by different amounts on each bond, while still ensuring an accurate
approximation of the optimal decision function.
Figure 7: Restoration (a) of MPS form, and (b) advancing a projected training input before optimizing
the tensors at the next bond. In diagram (a), if the label index ℓ was on the site j tensor before forming
B^ℓ, then the operation shown moves the label to site j + 1.
The scaling of the above algorithm is d³ m³ N N_L N_T, where recall m is the typical MPS bond
dimension; N the number of components of input vectors x; N_L the number of labels; and N_T
the size of the training data set. Thus the algorithm scales linearly in the training set size: a
major improvement over typical kernel-trick methods, which scale at least as N_T² without
specialized techniques [23]. This scaling assumes that the MPS bond dimension m needed is
independent of N_T, which should be satisfied once N_T is a large, representative sample.
In practice, the training cost is dominated by the large size of the training set N_T, so it would be
very desirable to reduce this cost. One solution could be to use stochastic gradient descent, but our
experiments at blending this approach with the MPS sweeping algorithm did not match the accuracy
of using the full, or batch gradient. Mixing stochastic gradient with MPS sweeping thus appears to be
non-trivial but is a promising direction for further research.
5 MNIST Handwritten Digit Test
To test the tensor network approach on a realistic task, we used the MNIST data set [24]. Each
image was scaled down from 28 × 28 to 14 × 14 by averaging clusters of four pixels; otherwise we
performed no further modifications to the training or test sets. Working with smaller images reduced
the time needed for training, with the tradeoff of having less information available for learning.
When approximating the weight tensor as an MPS, one must choose a one-dimensional ordering of
the local indices s_1, s_2, . . . , s_N. We chose a "zig-zag" ordering, meaning the first row of pixels is
mapped to the first 14 external MPS indices, the second row to the next 14 MPS indices, etc. We then
mapped each grayscale image x to a tensor Φ(x) using the local map Eq. (3).
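A short sketch of this preprocessing; we read "averaging clusters of four pixels" as 2×2 block averaging, which the text does not spell out:

    import numpy as np

    def preprocess(img28):
        # Downscale 28x28 -> 14x14 by averaging 2x2 pixel blocks, flatten
        # row by row (the "zig-zag" ordering described above), then apply
        # the local map Eq. (3) to every pixel.
        img14 = img28.reshape(14, 2, 14, 2).mean(axis=(1, 3))
        x = img14.reshape(-1)                               # N = 196
        return np.stack([np.cos(np.pi / 2 * x),
                         np.sin(np.pi / 2 * x)], axis=1)    # shape (196, 2)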
Using the sweeping algorithm in Section 4 to optimize the weights, we found the algorithm quickly
converged after a few passes, or sweeps, over the MPS. Typically five or fewer sweeps were needed to
see good convergence, with test error rates changing only by hundredths of a percent thereafter.
Test error rates also decreased rapidly with the maximum MPS bond dimension m. For m = 10 we
found both a training and test error of about 5%; for m = 20 the error dropped to only 2%. The
largest bond dimension we tried was m = 120, where after three sweeps we obtained a test error of
0.97%; the corresponding training set error was 0.05%. MPS bond dimensions in physics applications
can reach many hundreds or even thousands, so it is remarkable to see such small classification errors
for only m = 120.
6 Interpreting Tensor Network Models
A natural question is which set of functions of the form f^ℓ(x) = W^ℓ · Φ(x) can be realized when
using a tensor-product feature map Φ(x) of the form Eq. (2) and a tensor-network decomposition of
W^ℓ. As we will argue, the possible set of functions is quite general, but taking the tensor network
structure into account provides additional insights, such as determining which features the model
actually uses to perform classification.
Figure 8: (a) Decomposition of W^ℓ as an MPS with a central tensor and orthogonal site tensors. (b)
Orthogonality conditions for U and V type site tensors. (c) Transformation defining a reduced feature
map Φ̃(x).
6.1 Representational Power
To simplify the question of which decision functions can be realized for a tensor-product feature map
of the form Eq. (2), let us fix ℓ to a single label and omit it from the notation. We will also temporarily
consider W to be a completely general order-N tensor with no tensor network constraint. Then f(x)
is a function of the form

    f(x) = Σ_{{s}} W_{s_1 s_2 ⋯ s_N} φ^{s_1}(x_1) ⊗ φ^{s_2}(x_2) ⊗ ⋯ ⊗ φ^{s_N}(x_N) .    (9)
If the functions {φ^s(x)}, s = 1, 2, . . . , d, form a basis for a Hilbert space of functions over x ∈ [0, 1],
then the tensor product basis φ^{s_1}(x_1) ⊗ φ^{s_2}(x_2) ⊗ ⋯ ⊗ φ^{s_N}(x_N) forms a basis for a Hilbert space
of functions over x ∈ [0, 1]^{×N}. Moreover, in the limit that the basis {φ^s(x)} becomes complete,
the tensor product basis would also be complete and f(x) could be any square-integrable
function; however, practically reaching this limit would eventually require prohibitively large tensor
dimensions.
6.2 Implicit Feature Selection
Of course we have not been considering an arbitrary weight tensor W^ℓ but instead approximating the
weight tensor as an MPS tensor network. The MPS form implies that the decision function f^ℓ(x)
has interesting additional structure. One way to analyze this structure is to separate the MPS into a
central tensor, or core tensor, C^{α_i ℓ α_{i+1}} on some bond i and constrain all MPS site tensors to be left
orthogonal for sites j ≤ i or right orthogonal for sites j > i. This means W^ℓ has the decomposition

    W^ℓ_{s_1 s_2 ⋯ s_N} = Σ_{{α}} U^{α_1}_{s_1} ⋯ U^{α_{i−1} α_i}_{s_i} C^{ℓ}_{α_i α_{i+1}} V^{α_{i+1} α_{i+2}}_{s_{i+1}} ⋯ V^{α_{N−1}}_{s_N}    (10)
as illustrated in Fig. 8(a). To say the U and V tensors are left or right orthogonal means that, when viewed
as matrices U_{(α_{j−1} s_j) α_j} and V_{α_{j−1} (s_j α_j)}, these tensors have the property U†U = I and V V† = I,
where I is the identity; these orthogonality conditions can be understood more clearly in terms of the
diagrams in Fig. 8(b). Any MPS can be brought into the form Eq. (10) through an efficient sequence
of tensor contractions and SVD operations similar to the steps in Fig. 7(a).
The form in Eq. (10) suggests an interpretation where the decision function f^ℓ(x) acts in three
stages. First, an input x is mapped into the d^N-dimensional feature space defined by Φ(x), which is
exponentially larger than the dimension N of the input space. Next, the feature vector Φ is mapped
into a much smaller m²-dimensional space by contraction with all the U and V site tensors of the
MPS. This second step defines a new feature map Φ̃(x) with m² components, as illustrated in Fig. 8(c).
Finally, f^ℓ(x) is computed by contracting Φ̃(x) with C^ℓ.
To justify calling Φ̃(x) a feature map, it follows from the left- and right-orthogonality conditions of
the U and V tensors of the MPS Eq. (10) that the indices α_i and α_{i+1} of the core tensor C label an
orthonormal basis for a subspace of the original feature space. The vector Φ̃(x) is the projection of
Φ(x) into this subspace.
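A sketch of this projection, reusing the storage conventions of the earlier evaluation sketch (left-orthogonal U tensors on sites 1..i, right-orthogonal V tensors on sites i+1..N, boundary bonds of dimension 1; these conventions are ours):

    import numpy as np

    def reduced_feature_map(us, vs, phi, i):
        # Contract the local input vectors with the U and V "wings" to get
        # the m x m matrix of components of Phi~(x), Fig. 8(c); its two
        # indices are (alpha_i, alpha_{i+1}).
        left = np.ones(1)
        for U, v in zip(us, phi[:i]):
            left = left @ np.einsum('ldr,d->lr', U, v)
        right = np.ones(1)
        for V, v in zip(reversed(vs), reversed(phi[i:])):
            right = np.einsum('ldr,d->lr', V, v) @ right
        return np.outer(left, right)

    # The decision function is then f^l(x) = einsum('lab,ab->l', C, Phi~),
    # with C indexed as C^l_{alpha_i alpha_{i+1}}.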
The above interpretation implies that training an MPS model uncovers a relatively small set of
important features and simultaneously trains a decision function using only these reduced features.
The feature selection step occurs when computing the SVD in Eq. (8), where any basis elements
α_j which do not contribute meaningfully to the optimal bond tensor are discarded. (In our MNIST
experiment the first and last tensors of the MPS completely factorized during training, implying they
were not useful for classification, as the pixels at the corners of each image were always white.) Such
a picture is roughly similar to popular interpretations of simultaneously training the hidden and output
layers of shallow neural network models [25]. (MPS were first proposed for learning features in
Bengua et al. [4], but with a different, lower-dimensional data representation than what is used here.)
7 Discussion
We have introduced a framework for applying quantum-inspired tensor networks to supervised
learning tasks. While using an MPS ansatz for the model parameters worked well even for the
two-dimensional data in our MNIST experiment, other tensor networks such as PEPS [6], which
are explicitly designed for two-dimensional systems, or MERA tensor networks [15], which have a
multi-scale structure and can capture power-law correlations, may be more suitable and offer superior
performance. Much work remains to determine the best tensor network for a given domain.
There is also much room to improve the optimization algorithm by incorporating standard techniques
such as mini-batches, momentum, or adaptive learning rates. It would be especially interesting to
investigate unsupervised techniques for initializing the tensor network. Additionally, while the tensor
network parameterization of a model clearly regularizes it in the sense of reducing the number of
parameters, it would be helpful to understand the consequences of this regularization for specific
learning tasks. It could also be fruitful to include standard regularizations of the parameters of the
tensor network, such as weight decay or L1 penalties. We were surprised to find good generalization
without using explicit parameter regularization.
We anticipate models incorporating tensor networks will continue to be successful for quite a large
variety of learning tasks because of their treatment of high-order correlations between features
and their ability to be adaptively optimized. With the additional opportunities they present for
interpretation of trained models due to the internal, linear tensor network structure, we believe there
are many promising research directions for tensor network models.
Note: while we were preparing our final manuscript, Novikov et al. [26] published a related framework
for using MPS (tensor trains) to parameterize supervised learning models.
References
[1] Animashree Anandkumar, Rong Ge, Daniel Hsu, Sham M. Kakade, and Matus Telgarsky.
Tensor decompositions for learning latent variable models. Journal of Machine Learning
Research, 15:2773–2832, 2014.
[2] Animashree Anandkumar, Rong Ge, Daniel Hsu, and Sham M. Kakade. A tensor approach
to learning mixed membership community models. J. Mach. Learn. Res., 15(1):2239–2312,
January 2014. ISSN 1532-4435.
[3] Anh Huy Phan and Andrzej Cichocki. Tensor decompositions for feature extraction and
classification of high dimensional datasets. Nonlinear theory and its applications, IEICE, 1(1):
37–68, 2010.
[4] J.A. Bengua, H.N. Phien, and H.D. Tuan. Optimal feature extraction and classification of tensors
via matrix product state decomposition. In 2015 IEEE Intl. Congress on Big Data (BigData
Congress), pages 669–672, June 2015.
[5] Alexander Novikov, Dmitry Podoprikhin, Anton Osokin, and Dmitry Vetrov. Tensorizing neural
networks. arxiv:1509.06569, 2015.
[6] Glen Evenbly and Guifré Vidal. Tensor network states and geometry. Journal of Statistical
Physics, 145:891–918, 2011.
[7] Jacob C. Bridgeman and Christopher T. Chubb. Hand-waving and interpretive dance: An
introductory course on tensor networks. arxiv:1603.03039, 2016.
[8] U. Schollwöck. The density-matrix renormalization group in the age of matrix product states.
Annals of Physics, 326(1):96–192, 2011.
[9] K. R. Müller, S. Mika, G. Rätsch, K. Tsuda, and B. Schölkopf. An introduction to kernel-based
learning algorithms. IEEE Transactions on Neural Networks, 12(2):181–201, Mar 2001.
[10] E. M. Stoudenmire and Steven R. White. Real-space parallel density matrix renormalization
group. Phys. Rev. B, 87:155137, Apr 2013.
[11] Stellan Östlund and Stefan Rommer. Thermodynamic limit of density matrix renormalization.
Phys. Rev. Lett., 75(19):3537–3540, Nov 1995.
[12] I. Oseledets. Tensor-train decomposition. SIAM Journal on Scientific Computing, 33(5):
2295–2317, 2011.
[13] F. Verstraete and J. I. Cirac. Renormalization algorithms for quantum-many body systems in
two and higher dimensions. cond-mat/0407066, 2004.
[14] Guifré Vidal. Entanglement renormalization. Phys. Rev. Lett., 99(22):220405, Nov 2007.
[15] Glen Evenbly and Guifré Vidal. Algorithms for entanglement renormalization. Phys. Rev. B,
79:144108, Apr 2009.
[16] Andrzej Cichocki. Tensor networks for big data analytics and large-scale optimization problems.
arxiv:1407.3124, 2014.
[17] Steven R. White. Density matrix formulation for quantum renormalization groups. Phys. Rev.
Lett., 69(19):2863–2866, 1992.
[18] Sebastian Holtz, Thorsten Rohwedder, and Reinhold Schneider. The alternating linear scheme
for tensor optimization in the tensor train format. SIAM Journal on Scientific Computing, 34(2):
A683–A713, 2012.
[19] ITensor Library (version 2.0.11). http://itensor.org/.
[20] Vladimir Vapnik. The Nature of Statistical Learning Theory. Springer-Verlag New York, 2000.
[21] W. Waegeman, T. Pahikkala, A. Airola, T. Salakoski, M. Stock, and B. De Baets. A kernel-based
framework for learning graded relations from data. Fuzzy Systems, IEEE Transactions on, 20
(6):1090–1101, Dec 2012.
[22] F. Verstraete, D. Porras, and J. I. Cirac. Density matrix renormalization group and periodic
boundary conditions: A quantum information perspective. Phys. Rev. Lett., 93(22):227205, Nov
2004.
[23] N. Cesa-Bianchi, Y. Mansour, and O. Shamir. On the complexity of learning with kernels.
Proceedings of The 28th Conference on Learning Theory, pages 297–325, 2015.
[24] Yann LeCun, Corinna Cortes, and Christopher J.C. Burges. MNIST handwritten digit database.
http://yann.lecun.com/exdb/mnist/.
[25] Michael Nielsen. Neural Networks and Deep Learning. Determination Press, 2015.
[26] Alexander Novikov, Mikhail Trofimov, and Ivan Oseledets. Exponential machines.
arxiv:1605.03795, 2016.
Diffusion-Convolutional Neural Networks
James Atwood and Don Towsley
College of Information and Computer Science
University of Massachusetts
Amherst, MA, 01003
{jatwood|towsley}@cs.umass.edu
Abstract
We present diffusion-convolutional neural networks (DCNNs), a new model for
graph-structured data. Through the introduction of a diffusion-convolution operation, we show how diffusion-based representations can be learned from graph-structured data and used as an effective basis for node classification. DCNNs have
several attractive qualities, including a latent representation for graphical data that
is invariant under isomorphism, as well as polynomial-time prediction and learning
that can be represented as tensor operations and efficiently implemented on a GPU.
Through several experiments with real structured datasets, we demonstrate that
DCNNs are able to outperform probabilistic relational models and kernel-on-graph
methods at relational node classification tasks.
1 Introduction
Working with structured data is challenging. On one hand, finding the right way to express and
exploit structure in data can lead to improvements in predictive performance; on the other, finding
such a representation may be difficult, and adding structure to a model can dramatically increase the
complexity of prediction.
The goal of this work is to design a flexible model for a general class of structured data that offers
improvements in predictive performance while avoiding an increase in complexity. To accomplish
this, we extend convolutional neural networks (CNNs) to general graph-structured data by introducing
a "diffusion-convolution" operation. Briefly, rather than scanning a "square" of parameters across a
grid-structured input like the standard convolution operation, the diffusion-convolution operation
builds a latent representation by scanning a diffusion process across each node in a graph-structured
input.
This model is motivated by the idea that a representation that encapsulates graph diffusion can provide
a better basis for prediction than a graph itself. Graph diffusion can be represented as a matrix power
series, providing a straightforward mechanism for including contextual information about entities
that can be computed in polynomial time and efficiently implemented on a GPU.
In this paper, we present diffusion-convolutional neural networks (DCNNs) and explore their performance on various classification tasks on graphical data. Many techniques include structural
information in classification tasks, such as probabilistic relational models and kernel methods;
DCNNs offer a complementary approach that provides a significant improvement in predictive
performance at node classification tasks.
As a model class, DCNNs offer several advantages:
• Accuracy: In our experiments, DCNNs significantly outperform alternative methods for
node classification tasks and offer comparable performance to baseline methods for graph
classification tasks.
Figure 1: DCNN model definition for (a) node classification and (b) graph classification tasks.
• Flexibility: DCNNs provide a flexible representation of graphical data that encodes node
features, edge features, and purely structural information with little preprocessing. DCNNs can be used for a variety of classification tasks with graphical data, including node
classification and whole-graph classification.
• Speed: Prediction from a DCNN can be expressed as a series of polynomial-time tensor
operations, allowing the model to be implemented efficiently on a GPU using existing
libraries.
The remainder of this paper is organized as follows. In Section 2, we present a formal definition of
the model, including descriptions of prediction and learning procedures. This is followed by several
experiments in Section 3 that explore the performance of DCNNs at node and graph classification
tasks. We briefly describe the limitations of the model in Section 4, then, in Section 5, we present
related work and discuss the relationship between DCNNs and other methods. Finally, conclusions
and future work are presented in Section 6.
2 Model
Consider a situation where we have a set of T graphs G = {G_t | t ∈ 1...T}. Each graph G_t = (V_t, E_t)
is composed of vertices V_t and edges E_t. The vertices are collectively described by an N_t × F design
matrix X_t of features¹, where N_t is the number of nodes in G_t, and the edges E_t are encoded by an
N_t × N_t adjacency matrix A_t, from which we can compute a degree-normalized transition matrix P_t
that gives the probability of jumping from node i to node j in one step. No constraints are placed on
the form of G_t; the graph can be weighted or unweighted, directed or undirected. Either the nodes or
graphs have labels Y associated with them, with the dimensionality of Y differing in each case.
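A minimal sketch of this preprocessing step; the handling of degree-zero nodes is our own convention, as the text does not specify it:

    import numpy as np

    def transition_matrix(A):
        # Degree-normalized transition matrix: P[i, j] is the probability
        # of jumping from node i to node j in one step. Rows of isolated
        # nodes are left all-zero.
        A = np.asarray(A, dtype=float)
        deg = A.sum(axis=1)
        P = np.zeros_like(A)
        nz = deg > 0
        P[nz] = A[nz] / deg[nz, None]
        return P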
We are interested in learning to predict Y ; that is, to predict a label for each of the nodes in each
graph or a label for each graph itself. In each case, we have access to some labeled entities (be they
nodes or graphs), and our task is to predict the values of the remaining unlabeled entities.
This setting can represent several well-studied machine learning tasks. If T = 1 (i.e. there is only
one input graph) and the labels Y are associated with the nodes, this reduces to the problem of
semisupervised classification; if there are no edges present in the input graph, this reduces further to
standard supervised classification. If T > 1 and the labels Y are associated with each graph, then
this represents the problem of supervised graph classification.
DCNNs are designed to perform any task that can be represented within this formulation. A DCNN
takes G as input and returns either a hard prediction for Y or a conditional distribution P(Y |X). Each
¹ Without loss of generality, we assume that the features are real-valued.
entity of interest (be it a node or a graph) is transformed to a diffusion-convolutional representation,
which is an H × F real matrix defined by H hops of graph diffusion over F features, and it is defined
by an H × F real-valued weight tensor W^c and a nonlinear differentiable function f that computes
the activations. So, for node classification tasks, the diffusion-convolutional representation of graph t,
Z_t, will be an N_t × H × F tensor, as illustrated in Figure 1a; for graph classification tasks, Z_t will be
an H × F matrix, as illustrated in Figure 1b.
The model is built on the idea of a diffusion kernel, which can be thought of as a measure of the level
of connectivity between any two nodes in a graph when considering all paths between them, with
longer paths being discounted more than shorter paths. Diffusion kernels provide an effective basis
for node classification tasks [1].
The term ?diffusion-convolution? is meant to evoke the ideas of feature learning, parameter tying, and
invariance that are characteristic of convolutional neural networks. The core operation of a DCNN is a
mapping from nodes and their features to the results of a diffusion process that begins at that node. In
contrast with standard CNNs, DCNN parameters are tied according to diffusion search depth rather than
their position in a grid. The diffusion-convolutional representation is invariant with respect to node
index rather than position; in other words, the diffusion-convolutional activations of two isomorphic
input graphs will be the same². Unlike standard CNNs, DCNNs have no pooling operation.
Node Classification Consider a node classification task where a label Y is predicted for each input
node in a graph. Let P_t* be an N_t × H × N_t tensor containing the power series of P_t, defined as
follows:

    P*_{tijk} = P^j_{tik}    (1)

The diffusion-convolutional activation Z_{tijk} for node i, hop j, and feature k of graph t is given by

    Z_{tijk} = f( W^c_{jk} · Σ_{l=1}^{N_t} P*_{tijl} X_{tlk} )    (2)

The activations can be expressed more concisely using tensor notation as

    Z_t = f( W^c ⊙ P_t* X_t )    (3)
where the ⊙ operator represents element-wise multiplication; see Figure 1a. The model only
entails O(H × F) parameters, making the size of the latent diffusion-convolutional representation
independent of the size of the input.
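The computation of Eqs. (1)-(3) takes a few lines of NumPy. This is a sketch: whether hop 0 (the identity term) is counted among the H hops is a convention the text leaves open, and tanh matches the nonlinearity used in Section 3.

    import numpy as np

    def power_series(P, H):
        # Eq. (1): stack P^0 .. P^H into the N x (H+1) x N tensor P*.
        powers = [np.eye(P.shape[0])]
        for _ in range(H):
            powers.append(powers[-1] @ P)
        return np.stack(powers, axis=1)

    def node_activations(Pstar, X, Wc):
        # Eqs. (2)-(3): Z = f(Wc ⊙ P* X), broadcast over nodes.
        PX = np.einsum('ihj,jf->ihf', Pstar, X)   # shape (N, H+1, F)
        return np.tanh(Wc[None] * PX)             # Wc has shape (H+1, F)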
The model is completed by a dense layer that connects Z to Y. A hard prediction for Y, denoted Ŷ,
can be obtained by taking the maximum activation, and a conditional probability distribution P(Y|X)
can be found by applying the softmax function:

    Ŷ = arg max( f(W^d Z) )    (4)

    P(Y|X) = softmax( f(W^d Z) )    (5)

This keeps the same form in the following extensions.
Graph Classification DCNNs can be extended to graph classification by taking the mean activation
over the nodes

    Z_t = f( W^c ⊙ 1ᵀ_{N_t} P_t* X_t / N_t )    (6)

where 1_{N_t} is an N_t × 1 vector of ones, as illustrated in Figure 1b.
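Relative to the node-level sketch above, the graph-level variant changes one contraction (reusing power_series from that sketch):

    import numpy as np

    def graph_activations(Pstar, X, Wc):
        # Eq. (6): mean over nodes of P* X before the nonlinearity.
        PX = np.einsum('ihj,jf->hf', Pstar, X) / Pstar.shape[0]
        return np.tanh(Wc * PX)                   # shape (H+1, F)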
Purely Structural DCNNs DCNNs can be applied to input graphs with no features by associating
a "bias feature" with value 1.0 with each node. Richer structure can be encoded by adding additional
structural node features such as Pagerank or clustering coefficient, although this does introduce some
hand-engineering and pre-processing.
² A proof is given in the appendix.
Figure 2: Learning curves (2a, 2b) and effect of search breadth (2c) for the Cora and Pubmed
datasets.
Learning DCNNs are learned via stochastic minibatch gradient descent on backpropagated error.
At each epoch, node indices are randomly grouped into several batches. The error of each batch is
computed by taking slices of the graph definition power series and propagating the input forward to
predict the output, then setting the weights by gradient descent on the back-propagated error. We also
make use of windowed early stopping; training is ceased if the validation error of a given epoch is
greater than the average of the last few epochs.
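The control flow just described, as a skeleton; `model`, `grad_step`, and `val_error` are placeholder names of ours, not identifiers from the released implementation:

    import numpy as np

    def train(model, X, y, idx_train, idx_val,
              batch_size=64, max_epochs=500, window=5):
        val_hist = []
        for epoch in range(max_epochs):
            for batch in np.array_split(np.random.permutation(idx_train),
                                        max(1, len(idx_train) // batch_size)):
                model.grad_step(X, y, batch)   # AdaGrad update (Section 3)
            err = model.val_error(X, y, idx_val)
            # Windowed early stopping: cease training once the validation
            # error exceeds the average of the last few epochs.
            if len(val_hist) >= window and err > np.mean(val_hist[-window:]):
                break
            val_hist.append(err)
        return model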
3 Experiments
In this section we present several experiments to investigate how well DCNNs perform at node
and graph classification tasks. In each case we compare DCNNs to other well-known and effective
approaches to the task.
In each of the following experiments, we use the AdaGrad algorithm [2] for gradient descent with a
learning rate of 0.05. All weights are initialized by sampling from a normal distribution with mean
zero and variance 0.01. We choose the hyperbolic tangent for the nonlinear differentiable function
f and use the multiclass hinge loss between the model predictions and ground truth as the training
objective. The model was implemented in Python using Lasagne and Theano [3].
3.1 Node classification
We ran several experiments to investigate how well DCNNs classify nodes within a single graph. The
graphs were constructed from the Cora and Pubmed datasets, which each consist of scientific papers
(nodes), citations between papers (edges), and subjects (labels).
Protocol In each experiment, the set G consists of a single graph G. During each trial, the input
graph's nodes are randomly partitioned into training, validation, and test sets, with each set having
    Model        |            Cora                 |           Pubmed
                 | Accuracy  F (micro)  F (macro)  | Accuracy  F (micro)  F (macro)
    l1logistic   |  0.7087    0.7087     0.6829    |  0.8718    0.8718     0.8698
    l2logistic   |  0.7292    0.7292     0.7013    |  0.8631    0.8631     0.8614
    KED          |  0.8044    0.8044     0.7928    |  0.8125    0.8125     0.7978
    KLED         |  0.8229    0.8229     0.8117    |  0.8228    0.8228     0.8086
    CRF-LBP      |  0.8449    –          0.8248    |  –         –          –
    2-hop DCNN   |  0.8677    0.8677     0.8584    |  0.8976    0.8976     0.8943

Table 1: A comparison of the performance between baseline ℓ1- and ℓ2-regularized logistic regression
models, exponential diffusion and Laplacian exponential diffusion kernel models, loopy belief
propagation (LBP) on a partially-observed conditional random field (CRF), and a two-hop DCNN on
the Cora and Pubmed datasets. The DCNN offers the best performance according to each measure,
and the gain is statistically significant in each case. The CRF-LBP result is quoted from [4], which
follows the same experimental protocol.
the same number of nodes. During training, all node features X, all edges E, and the labels Y of
the training and validation sets are visible to the model. We report classification accuracy as well
as micro- and macro-averaged F1; each measure is reported as a mean and confidence interval
computed from several trials.
We also provide learning curves for the Cora and Pubmed datasets. In this experiment, the validation
and test set each contain 10% of the nodes, and the amount of training data is varied between 10%
and 100% of the remaining nodes.
Baseline Methods "l1logistic" and "l2logistic" indicate ℓ1- and ℓ2-regularized logistic regression,
respectively. The inputs to the logistic regression models are the node features alone (i.e. the graph
structure is not used) and the regularization parameter is tuned using the validation set. "KED" and
"KLED" denote the exponential diffusion and Laplacian exponential diffusion kernels-on-graphs,
respectively, which have previously been shown to perform well on the Cora dataset [1]. These kernel
models take the graph structure as input (i.e. node features are not used) and the validation set is
used to determine the kernel hyperparameters. "CRF-LBP" indicates a partially-observed conditional
random field that uses loopy belief propagation for inference. Results for this model are quoted from
prior work [4] that uses the same dataset and experimental protocol.
Node Classification Data The Cora corpus [5] consists of 2,708 machine learning papers and the
5,429 citation edges that they share. Each paper is assigned a label drawn from seven possible machine
learning subjects, and each paper is represented by a bit vector where each feature corresponds to
the presence or absence of a term drawn from a dictionary with 1,433 unique entries. We treat the
citation network as an undirected graph.
The Pubmed corpus [5] consists of 19,717 scientific papers from the Pubmed database on the subject
of diabetes. Each paper is assigned to one of three classes. The citation network that joins the papers
consists of 44,338 links, and each paper is represented by a Term Frequency Inverse Document
Frequency (TFIDF) vector drawn from a dictionary with 500 terms. As with the Cora corpus, we
construct an adjacency-based DCNN that treats the citation network as an undirected graph.
Results Discussion Table 1 compares the performance of a two-hop DCNN with several baselines.
The DCNN offers the best performance according to different measures including classification
accuracy and micro- and macro-averaged F1, and the gain is statistically significant in each case
with negligible p-values. For all models except the CRF, we assessed this via a one-tailed two-sample
Welch?s t-test. The CRF result is quoted from prior work, so we used a one-tailed one-sample test.
Figures 2a and 2b show the learning curves for the Cora and Pubmed datasets. The DCNN
generally outperforms the baseline methods on the Cora dataset regardless of the amount of training
data available, although the Laplacian exponential diffusion kernel does offer comparable performance
when the entire training set is available. Note that the kernel methods were prohibitively slow to run
on the Pubmed dataset, so we do not include them in the learning curve.
Finally, the impact of diffusion breadth on performance is shown in Figure 2c. Most of the performance
is gained as the diffusion breadth grows from zero to three hops, then levels out as the diffusion
process converges.
3.2 Graph Classification
We also ran experiments to investigate how well DCNNs can learn to label whole graphs.
Protocol At the beginning of each trial, input graphs are randomly assigned to training, validation,
or test, with each set having the same number of graphs. During the learning phase, the training and
validation graphs, their node features, and their labels are made visible; the training set is used to
determine the parameters and the validation set to determine hyperparameters. At test time, the test
graphs and features are made visible and the graph labels are predicted and compared with ground
truth. Table 2 reports the mean accuracy, micro-averaged F1, and macro-averaged F1 over several
trials.
We also provide learning curves for the MUTAG (Figure 3a) and ENZYMES (Figure 3b) datasets.
In these experiments, the validation and test sets each contain 10% of the graphs, and we report the
Figure 3: Learning curves for the MUTAG (3a) and ENZYMES (3b) datasets, as well as the effect of
search breadth (3c).
performance of each model as a function of the proportion of the remaining graphs that are made
available for training.
Baseline Methods As a simple baseline, we apply linear classifiers to the average feature vector of
each graph; "l1logistic" and "l2logistic" indicate ℓ1- and ℓ2-regularized logistic regression applied as
described. "deepwl" indicates the Weisfeiler-Lehman (WL) subtree deep graph kernel. Deep graph
kernels decompose a graph into substructures, treat those substructures as words in a sentence, and fit
a word-embedding model to obtain a vectorization [6].
Graph Classification Data We apply DCNNs to a standard set of graph classification datasets
that consists of NCI1, NCI109, MUTAG, PTC, and ENZYMES. The NCI1 and NCI109 [7] datasets
consist of 4100 and 4127 graphs that represent chemical compounds. Each graph is labeled with
whether it has the ability to suppress or inhibit the growth of a panel of human tumor cell lines,
and each node is assigned one of 37 (for NCI1) or 38 (for NCI109) possible labels. MUTAG [8]
contains 188 nitro compounds that are labeled as either aromatic or heteroaromatic with seven node
features. PTC [9] contains 344 compounds labeled with whether they are carcinogenic in rats with 19
node features. Finally, ENZYMES [10] is a balanced dataset containing 600 proteins with three node
features.
Results Discussion In contrast with the node classification experiments, there is no clear best
model choice across the datasets or evaluation measures. In fact, according to Table 2, the only clear
choice is the "deepwl" graph kernel model on the ENZYMES dataset, which significantly outperforms
the other methods in terms of accuracy and micro- and macro-averaged F measure. Furthermore,
as shown in Figure 3, there is no clear benefit to broadening the search breadth H. These results
suggest that, while diffusion processes are an effective representation for nodes, they do a poor job of
summarizing entire graphs. It may be possible to improve these results by finding a more effective
way to aggregate the node operations than a simple mean, but we leave this as future work.
    Model        |           NCI1                 |          NCI109               |         ENZYMES
                 | Accuracy F (micro) F (macro)   | Accuracy F (micro) F (macro)  | Accuracy F (micro) F (macro)
    l1logistic   |  0.5728   0.5728    0.5711     |  0.5555   0.5555    0.5411    |  0.1640   0.1640    0.0904
    l2logistic   |  0.5688   0.5688    0.5641     |  0.5586   0.5568    0.5402    |  0.2030   0.2030    0.1110
    deepwl       |  0.6215   0.6215    0.5821     |  0.5801   0.5801    0.5178    |  0.2155   0.2155    0.1431
    2-hop DCNN   |  0.6250   0.5807    0.5807     |  0.6275   0.5884    0.5884    |  0.1590   0.1590    0.0809
    5-hop DCNN   |  0.6261   0.5898    0.5898     |  0.6286   0.5950    0.5899    |  0.1810   0.1810    0.0991

    Model        |          MUTAG                 |           PTC
                 | Accuracy F (micro) F (macro)   | Accuracy F (micro) F (macro)
    l1logistic   |  0.7190   0.7190    0.6405     |  0.5470   0.5470    0.4272
    l2logistic   |  0.7016   0.7016    0.5795     |  0.5565   0.5565    0.4460
    deepwl       |  0.6563   0.6563    0.5942     |  0.5113   0.5113    0.4444
    2-hop DCNN   |  0.6635   0.7975    0.79747    |  0.5660   0.0500    0.0531
    5-hop DCNN   |  0.6698   0.8013    0.8013     |  0.5530   0.0       0.0526

Table 2: A comparison of the performance between baseline methods and two- and five-hop DCNNs
on several graph classification datasets.
4 Limitations
Scalability DCNNs are realized as a series of operations on dense tensors. Storing the largest tensor
(P*, the transition matrix power series) requires O(N_t² H) memory, which can lead to out-of-memory
errors on the GPU for very large graphs in practice. As such, DCNNs can be readily applied to graphs
of tens to hundreds of thousands of nodes, but not to graphs with millions to billions of nodes.
Locality The model is designed to capture local behavior in graph-structured data. As a consequence of constructing the latent representation from diffusion processes that begin at each node,
we may fail to encode useful long-range spatial dependencies between individual nodes or other
non-local graph behavior.
5 Related Work
In this section we describe existing approaches to the problems of semi-supervised learning, graph
classification, and edge classification, and discuss their relationship to DCNNs.
Other Graph-Based Neural Network Models Other researchers have investigated how CNNs can
be extended from grid-structured to more general graph-structured data. [11] propose a spatial method
with ties to hierarchical clustering, where the layers of the network are defined via a hierarchical
partitioning of the node set. In the same paper, the authors propose a spectral method that extends
the notion of convolution to graph spectra. Later, [12] applied these techniques to data where
a graph is not immediately present but must be inferred. DCNNs, which fall within the spatial
category, are distinct from this work because their parameterization makes them transferable; a
DCNN learned on one graph can be applied to another. A related branch of work that has focused on
extending convolutional neural networks to domains where the structure of the graph itself is of direct
interest [13, 14, 15]. For example, [15] construct a deep convolutional model that learns real-valued
fingerprint representation of chemical compounds.
Probabilistic Relational Models DCNNs also share strong ties to probabilistic relational models
(PRMs), a family of graphical models that are capable of representing distributions over relational
data [16]. In contrast to PRMs, DCNNs are deterministic, which allows them to avoid the exponential
blowup in learning and inference that hampers PRMs.
Our results suggest that DCNNs outperform partially-observed conditional random fields, the state-of-the-art probabilistic relational model for semi-supervised learning. Furthermore, DCNNs
offer this performance at considerably lower computational cost. Learning the parameters of both
DCNNs and partially-observed CRFs involves numerically minimizing a nonconvex objective: the
backpropagated error in the case of DCNNs and the negative marginal log-likelihood for CRFs.
In practice, the marginal log-likelihood of a partially-observed CRF is computed using a contrast-of-partition-functions approach that requires running loopy belief propagation twice: once on the
entire graph and once with the observed labels fixed [17]. This algorithm, and thus each step in
the numerical optimization, has exponential time complexity O(E_t N_t^{C_t}), where C_t is the size of
the maximal clique in G_t [18]. In contrast, the learning subroutine for a DCNN requires only one
forward and backward pass for each instance in the training data. The complexity is dominated by
the matrix multiplication between the graph definition matrix A and the design matrix V, giving an
overall polynomial complexity of O(N_t² F).
Kernel Methods Kernel methods define similarity measures either between nodes (so-called
kernels on graphs) [1] or between graphs (graph kernels) and these similarities can serve as a
basis for prediction via the kernel trick. The performance of graph kernels can be improved by
decomposing a graph into substructures, treating those substructures as words in a sentence, and
fitting a word-embedding model to obtain a vectorization [6].
DCNNs share ties with the exponential diffusion family of kernels on graphs. The exponential
diffusion graph kernel K_ED is a sum of a matrix power series:

    K_ED = Σ_{j=0}^{∞} α^j A^j / j! = exp(αA)    (7)
The diffusion-convolution activation given in (3) is also constructed from a power series. However,
the representations have several important differences. First, the weights in (3) are learned via
backpropagation, whereas the kernel representation is not learned from data. Second, the diffusionconvolutional representation is built from both node features and the graph structure, whereas the
exponential diffusion kernel is built from the graph structure alone. Finally, the representations have
different dimensions: K_ED is an N_t × N_t kernel matrix, whereas Z_t is an N_t × H × F tensor that
does not conform to the definition of a kernel.
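For comparison, the kernel of Eq. (7) can be computed directly with a dense matrix exponential; α is the kernel hyperparameter (tuned on the validation set in the experiments), and the 0.5 default here is only a placeholder of ours:

    import numpy as np
    from scipy.linalg import expm

    def exponential_diffusion_kernel(A, alpha=0.5):
        # Eq. (7): K_ED = exp(alpha * A). Unlike the learned weights of a
        # DCNN, alpha is the only free parameter and the kernel is built
        # from the graph structure alone.
        return expm(alpha * np.asarray(A, dtype=float))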
6 Conclusion and Future Work
By learning a representation that encapsulates the results of graph diffusion, diffusion-convolutional
neural networks offer performance improvements over probabilistic relational models and kernel
methods at node classification tasks. We intend to investigate methods for a) improving DCNN
performance at graph classification tasks and b) making the model scalable in future work.
7 Appendix: Representation Invariance for Isomorphic Graphs
If two graphs G1 and G2 are isomorphic, then their diffusion-convolutional activations are the same.
Proof by contradiction; assume that G1 and G2 are isomorphic and that their diffusion-convolutional
activations are different. The diffusion-convolutional activations can be written as
    Z_{1jk} = f( W^c_{jk} Σ_{v∈V_1} Σ_{v′∈V_1} P*_{1vjv′} X_{1v′k} / N_1 )

    Z_{2jk} = f( W^c_{jk} Σ_{v∈V_2} Σ_{v′∈V_2} P*_{2vjv′} X_{2v′k} / N_2 )

Note that

    V_1 = V_2 = V
    X_{1vk} = X_{2vk} = X_{vk}  ∀ v ∈ V, k ∈ [1, F]
    P*_{1vjv′} = P*_{2vjv′} = P*_{vjv′}  ∀ v, v′ ∈ V, j ∈ [0, H]
    N_1 = N_2 = N

by isomorphism, allowing us to rewrite the activations as

    Z_{1jk} = f( W^c_{jk} Σ_{v∈V} Σ_{v′∈V} P*_{vjv′} X_{v′k} / N )

    Z_{2jk} = f( W^c_{jk} Σ_{v∈V} Σ_{v′∈V} P*_{vjv′} X_{v′k} / N )
This implies that Z_1 = Z_2, which presents a contradiction and completes the proof.
Acknowledgments
We would like to thank Bruno Ribeiro, Pinar Yanardag, and David Belanger for their feedback on
drafts of this paper. This work was supported in part by Army Research Office Contract W911NF12-1-0385 and ARL Cooperative Agreement W911NF-09-2-0053. This work was also supported by
NVIDIA through the donation of equipment used to perform experiments.
References
[1] François Fouss, Kevin Francoisse, Luh Yen, Alain Pirotte, and Marco Saerens. An experimental investigation of kernels on graphs for collaborative recommendation and semisupervised classification. Neural Networks, 31:53-72, July 2012.
[2] John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 2011.
[3] James Bergstra, Olivier Breuleux, Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, Guillaume Desjardins, Joseph Turian, David Warde-Farley, and Yoshua Bengio. Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference (SciPy), 2010.
[4] P. Sen and L. Getoor. Link-based classification. Technical Report, 2007.
[5] Prithviraj Sen, Galileo Mark Namata, Mustafa Bilgic, Lise Getoor, Brian Gallagher, and Tina Eliassi-Rad. Collective Classification in Network Data. AI Magazine, 2008.
[6] Pinar Yanardag and S. V. N. Vishwanathan. Deep Graph Kernels. In the 21st ACM SIGKDD International Conference, pages 1365-1374, New York, New York, USA, 2015. ACM Press.
[7] Nikil Wale, Ian A. Watson, and George Karypis. Comparison of descriptor spaces for chemical compound retrieval and classification. Knowledge and Information Systems, 14(3):347-375, August 2007.
[8] Asim Kumar Debnath, Rosa L. Lopez de Compadre, Gargi Debnath, Alan J. Shusterman, and Corwin Hansch. Structure-activity relationship of mutagenic aromatic and heteroaromatic nitro compounds. Correlation with molecular orbital energies and hydrophobicity. Journal of Medicinal Chemistry, 34(2):786-797, 1991.
[9] Hannu Toivonen, Ashwin Srinivasan, Ross D. King, Stefan Kramer, and Christoph Helma. Statistical evaluation of the predictive toxicology challenge 2000-2001. Bioinformatics, 19(10):1183-1193, 2003.
[10] Karsten M. Borgwardt, Cheng Soon Ong, Stefan Schönauer, S. V. N. Vishwanathan, Alex J. Smola, and Hans-Peter Kriegel. Protein function prediction via graph kernels. Bioinformatics, 21(suppl 1):i47-i56, 2005.
[11] Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. Spectral networks and locally connected networks on graphs. arXiv.org, 2014.
[12] M. Henaff, J. Bruna, and Y. LeCun. Deep Convolutional Networks on Graph-Structured Data. arXiv.org, 2015.
[13] F. Scarselli, M. Gori, Ah Chung Tsoi, M. Hagenbuchner, and G. Monfardini. The Graph Neural Network Model. IEEE Transactions on Neural Networks, 2009.
[14] A. Micheli. Neural Network for Graphs: A Contextual Constructive Approach. IEEE Transactions on Neural Networks, 2009.
[15] David K. Duvenaud, Dougal Maclaurin, Jorge Aguilera-Iparraguirre, Rafael Gómez-Bombarelli, Timothy Hirzel, Alán Aspuru-Guzik, and Ryan P. Adams. Convolutional Networks on Graphs for Learning Molecular Fingerprints. NIPS, 2015.
[16] Daphne Koller and Nir Friedman. Probabilistic Graphical Models: Principles and Techniques. The MIT Press, 2009.
[17] Jakob Verbeek and William Triggs. Scene segmentation with CRFs learned from partially labeled images. NIPS, 2007.
[18] Trevor Cohn. Efficient Inference in Large Conditional Random Fields. ECML, 2006.
Optimal Learning for Multi-pass Stochastic Gradient
Methods
Junhong Lin
LCSL, IIT-MIT, USA
[email protected]
Lorenzo Rosasco
DIBRIS, Univ. Genova, ITALY
LCSL, IIT-MIT, USA
[email protected]
Abstract
We analyze the learning properties of the stochastic gradient method when multiple
passes over the data and mini-batches are allowed. In particular, we consider
the square loss and show that for a universal step-size choice, the number of
passes acts as a regularization parameter, and optimal finite sample bounds can be
achieved by early-stopping. Moreover, we show that larger step-sizes are allowed
when considering mini-batches. Our analysis is based on a unifying approach,
encompassing both batch and stochastic gradient methods as special cases.
1 Introduction
Modern machine learning applications require computational approaches that are at the same time
statistically accurate and numerically efficient [2]. This has motivated a recent interest in stochastic
gradient methods (SGM), since on the one hand they enjoy good practical performances, especially
in large scale scenarios, and on the other hand they are amenable to theoretical studies. In particular,
unlike other learning approaches, such as empirical risk minimization or Tikhonov regularization,
theoretical results on SGM naturally integrate statistical and computational aspects.
Most generalization studies on SGM consider the case where only one pass over the data is allowed
and the step-size is appropriately chosen, [5, 14, 29, 26, 9, 16] (possibly considering averaging [18]).
In particular, recent works show how the step-size can be seen to play the role of a regularization
parameter whose choice controls the bias and variance properties of the obtained solution [29, 26, 9].
These latter works show that balancing these contributions, it is possible to derive a step-size choice
leading to optimal learning bounds. Such a choice typically depends on some unknown properties of
the data generating distributions and in practice can be chosen by cross-validation.
While processing each data point only once is natural in streaming/online scenarios, in practice SGM
is often used as a tool for processing large data-sets and multiple passes over the data are typically
considered. In this case, the number of passes over the data, as well as the step-size, need then to
be determined. While the role of multiple passes is well understood if the goal is empirical risk
minimization [3], its effect with respect to generalization is less clear and a few recent works have
recently started to tackle this question. In particular, results in this direction have been derived in [10]
and [11]. The former work considers a general stochastic optimization setting and studies stability
properties of SGM allowing to derive convergence results as well as finite sample bounds. The latter
work, restricted to supervised learning, further develops these results to compare the respective roles
of step-size and number of passes, and show how different parameter settings can lead to optimal
error bounds. In particular, it shows that there are two extreme cases: one between the step-size or the
number of passes is fixed a priori, while the other one acts as a regularization parameter and needs
to be chosen adaptively. The main shortcoming of these latter results is that they are in the worst
case, in the sense that they do not consider the possible effect of capacity assumptions [30, 4] shown
to lead to faster rates for other learning approaches such as Tikhonov regularization. Further, these
results do not consider the possible effect of mini-batches, rather than a single point in each gradient
step [21, 8, 24, 15]. This latter strategy is often considered especially for parallel implementation of
SGM.
The study in this paper, fills in these gaps in the case where the loss function is the least squares loss.
We consider a variant of SGM for least squares, where gradients are sampled uniformly at random
and mini-batches are allowed. The number of passes, the step-size and the mini-batch size are then
parameters to be determined. Our main results highlight the respective roles of these parameters and
show how can they be chosen so that the corresponding solutions achieve optimal learning errors. In
particular, we show for the first time that multi-pass SGM with early stopping and a universal step-size
choice can achieve optimal learning rates, matching those of ridge regression [23, 4]. Further, our
analysis shows how the mini-batch size and the step-size choice are tightly related. Indeed, larger
mini-batch sizes allow to consider larger step-sizes while keeping the optimal learning bounds. This
result could give an insight on how to exploit mini-batches for parallel computations while preserving
optimal statistical accuracy. Finally we note that a recent work [19] is tightly related to the analysis
in the paper. The generalization properties of a multi-pass incremental gradient are analyzed in
[19], for a cyclic, rather than a stochastic, choice of the gradients and with no mini-batches. The
analysis in this latter case appears to be harder and results in [19] give good learning bounds only in
restricted setting and considering iterates rather than the excess risk. Compared to [19] our results
show how stochasticity can be exploited to get faster capacity dependent rates and analyze the role of
mini-batches. The basic idea of our proof is to approximate the SGM learning sequence in terms of
the batch GM sequence, see Subsection 3.4 for further details. This thus allows one to study batch
and stochastic gradient methods simultaneously, and may be also useful for analysing other learning
algorithms.
The rest of this paper is organized as follows. Section 2 introduces the learning setting and the SGM
algorithm. Main results with discussions and proof sketches are presented in Section 3. Finally,
simple numerical simulations are given in Section 4 to complement our theoretical results.
Notation For any $a, b \in \mathbb{R}$, $a \vee b$ denotes the maximum of $a$ and $b$. $\mathbb{N}$ is the set of all positive integers. For any $T \in \mathbb{N}$, $[T]$ denotes the set $\{1, \cdots, T\}$. For any two positive sequences $\{a_t\}_{t \in [T]}$ and $\{b_t\}_{t \in [T]}$, the notation $a_t \lesssim b_t$ for all $t \in [T]$ means that there exists a positive constant $C \geq 0$ such that $C$ is independent of $t$ and that $a_t \leq C b_t$ for all $t \in [T]$.
2 Learning with SGM
We begin by introducing the learning setting we consider, and then describe the SGM learning
algorithm. Following [19], the formulation we consider is close to the setting of functional regression,
and covers the reproducing kernel Hilbert space (RKHS) setting as a special case. In particular, it
reduces to standard linear regression for finite dimensions.
2.1 Learning Problems
Let $H$ be a separable Hilbert space, with inner product and induced norm denoted by $\langle \cdot, \cdot \rangle_H$ and $\|\cdot\|_H$, respectively. Let the input space $X \subseteq H$ and the output space $Y \subseteq \mathbb{R}$. Let $\rho$ be an unknown probability measure on $Z = X \times Y$, $\rho_X(\cdot)$ the induced marginal measure on $X$, and $\rho(\cdot|x)$ the conditional probability measure on $Y$ with respect to $x \in X$ and $\rho$.
Considering the square loss function, the problem under study is the minimization of the risk,
$$\inf_{\omega \in H} \mathcal{E}(\omega), \qquad \mathcal{E}(\omega) = \int_{X \times Y} (\langle \omega, x \rangle_H - y)^2 \, d\rho(x, y), \quad (1)$$
when the measure $\rho$ is known only through a sample $\mathbf{z} = \{z_i = (x_i, y_i)\}_{i=1}^m$ of size $m \in \mathbb{N}$, independently and identically distributed (i.i.d.) according to $\rho$. In the following, we measure the quality of an approximate solution $\hat{\omega} \in H$ (an estimator) considering the excess risk, i.e.,
$$\mathcal{E}(\hat{\omega}) - \inf_{\omega \in H} \mathcal{E}(\omega). \quad (2)$$
Throughout this paper, we assume that there exists a constant $\kappa \in [1, \infty[$, such that
$$\langle x, x' \rangle_H \leq \kappa^2, \qquad \forall x, x' \in X. \quad (3)$$
2.2 Stochastic Gradient Method
We study the following SGM (with mini-batches, without penalization or constraints).
Algorithm 1. Let $b \in [m]$. Given any sample $\mathbf{z}$, the $b$-minibatch stochastic gradient method is defined by $\omega_1 = 0$ and
$$\omega_{t+1} = \omega_t - \eta_t \frac{1}{b} \sum_{i=b(t-1)+1}^{bt} (\langle \omega_t, x_{j_i} \rangle_H - y_{j_i}) \, x_{j_i}, \qquad t = 1, \ldots, T, \quad (4)$$
where $\{\eta_t > 0\}$ is a step-size sequence. Here, $j_1, j_2, \cdots, j_{bT}$ are independent and identically distributed (i.i.d.) random variables from the uniform distribution on $[m]$¹.
Different choices for the (mini-)batch size b can lead to different algorithms. In particular, for b = 1,
the above algorithm corresponds to a simple SGM, while for b = m, it is a stochastic version of the
batch gradient descent.
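For concreteness, the sketch below transcribes update (4) for the finite-dimensional case $H = \mathbb{R}^d$; the default step-size mirrors the $1/m$ choice analyzed in Section 3, and all names are our own illustration rather than the authors' code.

```python
import numpy as np

def minibatch_sgm(X, y, b=1, T=1000, step=None, rng=None):
    """b-minibatch SGM for least squares (update (4)), linear case H = R^d.

    At each step, b indices are drawn i.i.d. uniformly from [m] and the
    iterate moves along the averaged gradient of the sampled losses.
    """
    rng = rng or np.random.default_rng(0)
    m, d = X.shape
    step = step if step is not None else 1.0 / m   # universal choice eta ~ 1/m
    w = np.zeros(d)                                 # omega_1 = 0
    for t in range(T):
        J = rng.integers(0, m, size=b)              # j_{b(t-1)+1}, ..., j_{bt}
        Xb, yb = X[J], y[J]
        w -= step * Xb.T @ (Xb @ w - yb) / b
    return w
```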
The aim of this paper is to derive excess risk bounds for the above algorithm under appropriate assumptions. Throughout this paper, we assume that $\{\eta_t\}_t$ is non-increasing, and $T \in \mathbb{N}$ with $T \geq 3$. We denote by $J_t$ the set $\{j_l : l = b(t-1)+1, \cdots, bt\}$ and by $J$ the set $\{j_l : l = 1, \cdots, bT\}$.
3 Main Results with Discussions
In this section, we first state some basic assumptions. Then, we present and discuss our main results.
3.1 Assumptions
The following assumption is related to a moment hypothesis on $|y|^2$. It is weaker than the often considered bounded output assumption, and trivially verified in binary classification problems where $Y = \{-1, 1\}$.
Assumption 1. There exist constants $M \in ]0, \infty[$ and $v \in ]1, \infty[$ such that
$$\int_Y y^{2l} \, d\rho(y|x) \leq l! \, M^l v, \qquad \forall l \in \mathbb{N}, \quad (5)$$
$\rho_X$-almost surely.
To present our next assumption, we introduce the operator $L : L^2(H, \rho_X) \to L^2(H, \rho_X)$, defined by $L(f) = \int_X \langle x, \cdot \rangle_H f(x) \, d\rho_X(x)$. Under Assumption (3), $L$ can be proved to be a positive trace class operator, and hence $L^a$ with $a \in \mathbb{R}$ can be defined by using spectral theory [7].
The Hilbert space of square integrable functions from $H$ to $\mathbb{R}$ with respect to $\rho_X$, with induced norm given by $\|f\|_\rho = \left( \int_X |f(x)|^2 \, d\rho_X(x) \right)^{1/2}$, is denoted by $(L^2(H, \rho_X), \|\cdot\|_\rho)$. It is well known that the function minimizing $\int_Z (f(x) - y)^2 \, d\rho(z)$ over all measurable functions $f : H \to \mathbb{R}$ is the regression function, which is given by
$$f_\rho(x) = \int_Y y \, d\rho(y|x), \qquad x \in X. \quad (6)$$
Define another Hilbert space $H_\rho = \{f : X \to \mathbb{R} \mid \exists \omega \in H \text{ with } f(x) = \langle \omega, x \rangle_H, \ \rho_X\text{-almost surely}\}$. Under Assumption (3), it is easy to see that $H_\rho$ is a subspace of $L^2(H, \rho_X)$. Let $f_H$ be the projection of the regression function $f_\rho$ onto the closure of $H_\rho$ in $L^2(H, \rho_X)$. It is easy to see that the search for a solution of Problem (1) is equivalent to the search of a linear function from $H_\rho$ to approximate $f_H$. From this point of view, bounds on the excess risk of a learning algorithm naturally depend on the following assumption, which quantifies how well the target function $f_H$ can be approximated by $H_\rho$.
Assumption 2. There exist $\zeta > 0$ and $R > 0$, such that $\|L^{-\zeta} f_H\|_\rho \leq R$.
The above assumption is fairly standard [7, 19] in non-parametric regression. The bigger $\zeta$ is, the more stringent the assumption is, since $L^{\zeta_1}(L^2(H, \rho_X)) \subseteq L^{\zeta_2}(L^2(H, \rho_X))$ when $\zeta_1 \geq \zeta_2$. In particular, for $\zeta = 0$, we are assuming $\|f_H\|_\rho < \infty$, while for $\zeta = 1/2$, we are requiring $f_H \in H_\rho$, since [25, 19]
$$\overline{H_\rho} = L^{1/2}(L^2(H, \rho_X)).$$
Finally, the last assumption relates to the capacity of the hypothesis space.
¹ Note that the random variables $j_1, \cdots, j_{bT}$ are conditionally independent given the sample $\mathbf{z}$.
Assumption 3. For some $\gamma \in ]0, 1]$ and $c_\gamma > 0$, $L$ satisfies
$$\operatorname{tr}(L(L + \lambda I)^{-1}) \leq c_\gamma \lambda^{-\gamma}, \qquad \text{for all } \lambda > 0. \quad (7)$$
The LHS of (7) is called the effective dimension, or the degrees of freedom [30, 4]. It can be related to covering/entropy number conditions, see [25] for further details. Assumption 3 is always true for $\gamma = 1$ and $c_\gamma = \kappa^2$, since $L$ is a trace class operator, which implies the eigenvalues of $L$, denoted as $\sigma_i$, satisfy $\operatorname{tr}(L) = \sum_i \sigma_i \leq \kappa^2$. This is referred to as the capacity independent setting. Assumption 3 with $\gamma \in ]0, 1[$ allows one to derive better error rates. It is satisfied, e.g., if the eigenvalues of $L$ satisfy a polynomial decaying condition $\sigma_i \sim i^{-1/\gamma}$, or with $\gamma = 0$ if $L$ is finite rank.
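The quantity on the LHS of (7) is easy to evaluate once a spectrum is given. The sketch below is an illustration only; the synthetic polynomially decaying spectrum is an assumption, and it simply shows the effective dimension growing like $\lambda^{-\gamma}$.

```python
import numpy as np

def effective_dimension(eigvals, lam):
    """tr(L (L + lam I)^{-1}) = sum_i sigma_i / (sigma_i + lam)."""
    return np.sum(eigvals / (eigvals + lam))

# Example: sigma_i ~ i^{-1/gamma} gives an effective dimension ~ lam^{-gamma}.
gamma = 0.5
sigma = np.arange(1, 10001) ** (-1.0 / gamma)
for lam in [1e-1, 1e-2, 1e-3]:
    print(lam, effective_dimension(sigma, lam))
```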
3.2 Main Results
We start with the following corollary, which is a simplified version of our main results stated next.
Corollary 3.1. Under Assumptions 2 and 3, let $\zeta \geq 1/2$ and $|y| \leq M$ $\rho_X$-almost surely for some $M > 0$. Consider the SGM with
1) $p^* = \lceil m^{\frac{1}{2\zeta+\gamma}} \rceil$, $b = 1$, $\eta_t \simeq \frac{1}{m}$ for all $t \in [p^* m]$, and $\bar{\omega}_{p^*} = \omega_{p^* m + 1}$.
If $m$ is large enough, with high probability², there holds
$$\mathbb{E}_J[\mathcal{E}(\bar{\omega}_{p^*})] - \inf_{\omega \in H} \mathcal{E} \lesssim m^{-\frac{2\zeta}{2\zeta+\gamma}}.$$
Furthermore, the above also holds for the SGM with³
2) $p^* = \lceil m^{\frac{1}{2\zeta+\gamma}} \rceil$, $b = \sqrt{m}$, $\eta_t \simeq \frac{1}{\sqrt{m}}$ for all $t \in [p^* \sqrt{m}]$, and $\bar{\omega}_{p^*} = \omega_{p^* \sqrt{m} + 1}$.
In the above, $p^*$ is the number of "passes" over the data, which is defined as $\lceil \frac{bt}{m} \rceil$ at $t$ iterations.
The above result asserts that, at $p^*$ passes over the data, the simple SGM with fixed step-size achieves optimal learning error bounds, matching those of ridge regression [4]. Furthermore, using mini-batches allows one to use a larger step-size while achieving the same optimal error bounds.
Remark 3.2 (Finite Dimensional Case). With a simple modification of our proofs, we can derive similar results for the finite dimensional case, i.e., $H = \mathbb{R}^d$, where in this case, $\gamma = 0$. In particular, letting $\zeta = 1/2$, under the same assumptions of Corollary 3.1, if one considers the SGM with $b = 1$ and $\eta_t \simeq \frac{1}{m}$ for all $t \leq m^2$, then with high probability, $\mathbb{E}_J[\mathcal{E}(\omega_{m^2+1})] - \inf_{\omega \in H} \mathcal{E} \lesssim d/m$, provided that $m \gtrsim d \log d$.
Our main theorem of this paper is stated next, and provides error bounds for the studied algorithm. For the sake of readability, we only consider the case $\zeta \geq 1/2$ in a fixed step-size setting. General results in a more general setting ($\eta_t = \eta_1 t^{-\theta}$ with $0 \leq \theta < 1$, and/or the case $\zeta \in ]0, 1/2]$) can be found in the appendix.
Theorem 3.3. Under Assumptions 1, 2 and 3, let $\zeta \geq 1/2$, $\delta \in ]0, 1[$, $\eta_t = \eta \kappa^{-2}$ for all $t \in [T]$, with $\eta \leq \frac{1}{8(\log T + 1)}$. If $m \geq m_\delta$, then the following holds with probability at least $1 - \delta$: for all $t \in [T]$,
$$\mathbb{E}_J[\mathcal{E}(\omega_{t+1})] - \inf_{\omega \in H} \mathcal{E} \leq q_1 (\eta t)^{-2\zeta} + q_2 \, m^{-\frac{2\zeta}{2\zeta+\gamma}} (1 + m^{-\frac{1}{2\zeta+\gamma}} \eta t)^2 \log^2 T \log^2 \frac{1}{\delta} + q_3 \, \eta b^{-1} (1 \vee m^{-\frac{1}{2\zeta+\gamma}} \eta t) \log T. \quad (8)$$
Here, $m_\delta$, $q_1$, $q_2$ and $q_3$ are positive constants depending on $\kappa^2$, $\|T\|$, $M$, $v$, $\zeta$, $R$, $c_\gamma$, $\gamma$, and $m_\delta$ also on $\delta$ (which will be given explicitly in the proof).
There are three terms in the upper bound of (8). The first term depends on the regularity of the target function and it arises from bounding the bias, while the last two terms result from estimating the sample variance and the computational variance (due to the random choices of the points), respectively. To derive optimal rates, it is necessary to balance these three terms. Solving this trade-off problem leads to different choices of $\eta$, $T$, and $b$, corresponding to different regularization strategies, as shown in the subsequent corollaries.
The first corollary gives generalization error bounds for SGM, with a universal step-size depending on the number of sample points.
² Here, "high probability" refers to the sample $\mathbf{z}$.
³ Here, we assume that $\sqrt{m}$ is an integer.
Corollary 3.4. Under Assumptions 1, 2 and 3, let $\zeta \geq 1/2$, $\delta \in ]0, 1[$, $b = 1$ and $\eta_t \simeq \frac{1}{m}$ for all $t \in [T]$, where $T \leq m^2$. If $m \geq m_\delta$, then with probability at least $1 - \delta$, there holds
$$\mathbb{E}_J[\mathcal{E}(\omega_{t+1})] - \inf_{\omega \in H} \mathcal{E} \lesssim \left( \left( \frac{m}{t} \right)^{2\zeta} + m^{-\frac{2\zeta+2}{2\zeta+\gamma}} \left( \frac{t}{m} \right)^2 \right) \log^2 m \log^2 \frac{1}{\delta}, \qquad \forall t \in [T], \quad (9)$$
and in particular,
$$\mathbb{E}_J[\mathcal{E}(\omega_{T^*+1})] - \inf_{\omega \in H} \mathcal{E} \lesssim m^{-\frac{2\zeta}{2\zeta+\gamma}} \log^2 m \log^2 \frac{1}{\delta}, \quad (10)$$
where $T^* = \lceil m^{\frac{2\zeta+\gamma+1}{2\zeta+\gamma}} \rceil$. Here, $m_\delta$ is exactly the same as in Theorem 3.3.
Remark 3.5. Ignoring the logarithmic term and letting $t = pm$, Eq. (9) becomes
$$\mathbb{E}_J[\mathcal{E}(\omega_{pm+1})] - \inf_{\omega \in H} \mathcal{E} \lesssim p^{-2\zeta} + m^{-\frac{2\zeta+2}{2\zeta+\gamma}} p^2.$$
A smaller $p$ may lead to a larger bias, while a larger $p$ may lead to a larger sample error. From this point of view, $p$ has a regularization effect.
The second corollary provides error bounds for SGM with a fixed mini-batch size and a fixed step-size (which depend on the number of sample points).
Corollary 3.6. Under Assumptions 1, 2 and 3, let $\zeta \geq 1/2$, $\delta \in ]0, 1[$, $b = \lceil \sqrt{m} \rceil$ and $\eta_t \simeq \frac{1}{\sqrt{m}}$ for all $t \in [T]$, where $T \leq m^2$. If $m \geq m_\delta$, then with probability at least $1 - \delta$, there holds
$$\mathbb{E}_J[\mathcal{E}(\omega_{t+1})] - \inf_{\omega \in H} \mathcal{E} \lesssim \left( \left( \frac{\sqrt{m}}{t} \right)^{2\zeta} + m^{-\frac{2\zeta+2}{2\zeta+\gamma}} \left( \frac{t}{\sqrt{m}} \right)^2 \right) \log^2 m \log^2 \frac{1}{\delta}, \qquad \forall t \in [T], \quad (11)$$
and particularly,
$$\mathbb{E}_J[\mathcal{E}(\omega_{T^*+1})] - \inf_{\omega \in H} \mathcal{E} \lesssim m^{-\frac{2\zeta}{2\zeta+\gamma}} \log^2 m \log^2 \frac{1}{\delta}, \quad (12)$$
where $T^* = \lceil m^{\frac{1}{2\zeta+\gamma} + \frac{1}{2}} \rceil$.
The above two corollaries follow from Theorem 3.3 with the simple observation that the dominating terms in (8) are the terms related to the bias and the sample variance, when a small step-size is chosen. The only free parameter in (9) and (11) is the number of iterations/passes.
The ideal stopping rule is achieved by balancing the two terms related to the bias and the sample variance, showing the regularization effect of the number of passes. Since the ideal stopping rule depends on the unknown parameters $\zeta$ and $\gamma$, a hold-out cross-validation procedure is often used to tune the stopping rule in practice. Using an argument similar to that in Chapter 6 from [25], it is possible to show that this procedure can achieve the same convergence rate.
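A minimal sketch of this hold-out procedure is given below for the linear case; the 80/20 split, the horizon and the per-step validation check are illustrative assumptions, not the protocol used in the paper.

```python
import numpy as np

def early_stopping_sgm(X, y, frac=0.8, b=1, T=None, step=None, rng=None):
    """Pick the SGM stopping time on a hold-out split (cross-validation).

    Trains on a fraction of the data, tracks the validation error along the
    path of iterates, and returns the iterate with the smallest error.
    """
    rng = rng or np.random.default_rng(0)
    m = X.shape[0]
    idx = rng.permutation(m)
    n_tr = int(frac * m)
    tr, va = idx[:n_tr], idx[n_tr:]
    T = T or n_tr ** 2
    step = step if step is not None else 1.0 / n_tr
    w = np.zeros(X.shape[1])
    best_w, best_err = w.copy(), np.inf
    for t in range(T):
        J = rng.integers(0, n_tr, size=b)
        Xb, yb = X[tr][J], y[tr][J]
        w -= step * Xb.T @ (Xb @ w - yb) / b
        err = np.mean((X[va] @ w - y[va]) ** 2)   # hold-out risk of iterate t
        if err < best_err:
            best_err, best_w = err, w.copy()
    return best_w
```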
We give some further remarks. First, the upper bound in (10) is optimal up to a logarithmic factor, in the sense that it matches the minimax lower rate in [4]. Second, according to Corollaries 3.4 and 3.6, $\frac{bT^*}{m} \simeq m^{\frac{1}{2\zeta+\gamma}}$ passes over the data are needed to obtain optimal rates in both cases. Finally, in comparing the simple SGM and the mini-batch SGM, Corollaries 3.4 and 3.6 show that a larger step-size is allowed for the latter.
In the next result, both the step-size and the stopping rule are tuned to obtain optimal rates for simple SGM with multiple passes. In this case, the step-size and the number of iterations are the regularization parameters.
Corollary 3.7. Under Assumptions 1, 2 and 3, let $\zeta \geq 1/2$, $\delta \in ]0, 1[$, $b = 1$ and $\eta_t \simeq m^{-\frac{2\zeta}{2\zeta+\gamma}}$ for all $t \in [T]$, where $T \leq m^2$. If $m \geq m_\delta$, and $T^* = \lceil m^{\frac{2\zeta+1}{2\zeta+\gamma}} \rceil$, then (10) holds with probability at least $1 - \delta$.
Remark 3.8. If we make no assumption on the capacity, i.e., $\gamma = 1$, Corollary 3.7 recovers the result in [29] for one pass SGM.
The next corollary shows that for some suitable mini-batch sizes, optimal rates can be achieved with
a constant step-size (which is nearly independent of the number of sample points) by early stopping.
Corollary 3.9. Under Assumptions 1, 2 and 3, let $\zeta \geq 1/2$, $\delta \in ]0, 1[$, $b = \lceil m^{\frac{2\zeta}{2\zeta+\gamma}} \rceil$ and $\eta_t \simeq \frac{1}{\log m}$ for all $t \in [T]$, where $T \leq m^2$. If $m \geq m_\delta$, and $T^* = \lceil m^{\frac{1}{2\zeta+\gamma}} \rceil$, then (10) holds with probability at least $1 - \delta$.
According to Corollaries 3.7 and 3.9, around $m^{\frac{1-\gamma}{2\zeta+\gamma}}$ passes over the data are needed to achieve the best performance in the above two strategies. In comparison with Corollaries 3.4 and 3.6, where around $m^{\frac{1}{2\zeta+\gamma}}$ passes are required, the strategies of Corollaries 3.7 and 3.9 thus seem to require fewer passes over the data. However, in this case, one might have to run the algorithms multiple times to tune the step-size, or the mini-batch size.
Finally, the last result gives generalization error bounds for "batch" SGM with a constant step-size (nearly independent of the number of sample points).
Corollary 3.10. Under Assumptions 1, 2 and 3, let $\zeta \geq 1/2$, $\delta \in ]0, 1[$, $b = m$ and $\eta_t \simeq \frac{1}{\log m}$ for all $t \in [T]$, where $T \leq m^2$. If $m \geq m_\delta$, and $T^* = \lceil m^{\frac{1}{2\zeta+\gamma}} \rceil$, then (10) holds with probability at least $1 - \delta$.
As will be seen in the proof from the appendix, the above result also holds when replacing the sequence $\{\omega_t\}_t$ by the sequence $\{\nu_t\}_t$ generated from batch GM in (14). In this sense, we study the gradient-based learning algorithms simultaneously.
3.3 Discussions
We compare our results with previous works. For non-parametric regression with the square loss, one pass SGM has been studied in, e.g., [29, 22, 26, 9]. In particular, [29] proved a capacity independent rate of order $O(m^{-\frac{2\zeta}{2\zeta+1}} \log m)$ with a fixed step-size $\eta \simeq m^{-\frac{2\zeta}{2\zeta+1}}$, and [9] derived capacity dependent error bounds of order $O(m^{-\frac{2\min(\zeta,1)}{2\min(\zeta,1)+\gamma}})$ (when $2\zeta + \gamma > 1$) for the average. Note also that a regularized version of SGM has been studied in [26], where the derived convergence rate there is of order $O(m^{-\frac{2\zeta}{2\zeta+1}})$ assuming that $\zeta \in [\frac{1}{2}, 1]$. In comparison with these existing convergence rates, our rates from (10) are comparable, either involving the capacity condition, or allowing a broader regularity parameter $\zeta$ (which thus improves the rates).
More recently, [19] studied multiple passes SGM with a fixed ordering at each pass, also called the incremental gradient method. Making no assumption on the capacity, rates of order $O(m^{-\frac{\zeta}{\zeta+1}})$ (in the $L^2(H, \rho_X)$-norm) with a universal step-size $\eta \simeq 1/m$ are derived. In comparison, Corollary 3.4 achieves better rates, while considering the capacity assumption. Note also that [19] proved a sharp rate in the $H$-norm for $\zeta \geq 1/2$ in the capacity independent case. In fact, we can extend our analysis to the $H$-norm for the algorithm (4). We postpone this extension to a longer version of this paper.
The idea of using mini-batches (and parallel implementations) to speed up SGM in a general stochastic optimization setting can be found, e.g., in [21, 8, 24, 15]. Our theoretical findings, especially the interplay between the mini-batch size and the step-size, can give further insights on parallelized learning. Besides, it has been shown in [6, 8] that for one pass mini-batch SGM with a fixed step-size $\eta \simeq b/\sqrt{m}$ and a smooth loss function, assuming the existence of at least one solution in the hypothesis space for the expected risk minimization, the convergence rate is of order $O(\sqrt{1/m} + b/m)$ by considering an averaging scheme. When adapting this to the learning setting we consider, it reads as follows: if $f_H \in H_\rho$, i.e., $\zeta = 1/2$, the convergence rate for the average is $O(\sqrt{1/m} + b/m)$. Note that $f_H$ does not necessarily belong to $H_\rho$ in general. Also, our derived convergence rate from Corollary 3.6 is better when the regularity parameter $\zeta$ is greater than $1/2$, or $\gamma$ is smaller than 1.
3.4 Proof Sketch (Error Decomposition)
The key to our proof is a novel error decomposition, which may also be used in analysing other learning algorithms. One may also use the approach in [12, 11], which is based on the error decomposition, i.e., for some suitably chosen intermediate element $\tilde{\omega} \in H$,
$$\mathbb{E}\mathcal{E}(\omega_t) - \inf_{\omega \in H} \mathcal{E} = \left[ \mathbb{E}(\mathcal{E}(\omega_t) - \mathcal{E}_{\mathbf{z}}(\omega_t)) + \mathbb{E}\mathcal{E}_{\mathbf{z}}(\tilde{\omega}) - \mathcal{E}(\tilde{\omega}) \right] + \mathbb{E}(\mathcal{E}_{\mathbf{z}}(\omega_t) - \mathcal{E}_{\mathbf{z}}(\tilde{\omega})) + \mathcal{E}(\tilde{\omega}) - \inf_{\omega \in H} \mathcal{E},$$
where $\mathcal{E}_{\mathbf{z}}$ denotes the empirical risk. However, one can only derive a sub-optimal convergence rate this way, since the proof procedure involves upper bounding the learning sequence to estimate the sample error (the first term of the RHS). In this case the "regularity" of the regression function can not be fully adapted for bounding the bias (the last term). Thanks to the properties of the square loss, we can exploit a different error decomposition leading to better results.
We first introduce two sequences. The population iteration is defined by $\mu_1 = 0$ and
$$\mu_{t+1} = \mu_t - \eta_t \int_X (\langle \mu_t, x \rangle_H - f_\rho(x)) \, x \, d\rho_X(x), \qquad t = 1, \ldots, T. \quad (13)$$
The above iterated procedure is ideal and can not be implemented in practice, since the distribution $\rho_X$ is unknown in general. Replacing $\rho_X$ by the empirical measure and $f_\rho(x_i)$ by $y_i$, we derive the sample iteration (associated with the sample $\mathbf{z}$), i.e., $\nu_1 = 0$ and
$$\nu_{t+1} = \nu_t - \eta_t \frac{1}{m} \sum_{i=1}^m (\langle \nu_t, x_i \rangle_H - y_i) \, x_i, \qquad t = 1, \ldots, T. \quad (14)$$
Clearly, $\mu_t$ is deterministic and $\nu_t$ is an $H$-valued random variable depending on $\mathbf{z}$. Given the sample $\mathbf{z}$, the sequence $\{\nu_t\}_t$ has a natural relationship with the learning sequence $\{\omega_t\}_t$, since
$$\mathbb{E}_J[\omega_t] = \nu_t. \quad (15)$$
Indeed, taking the expectation with respect to $J_t$ on both sides of (4), and noting that $\omega_t$ depends only on $J_1, \cdots, J_{t-1}$ (given any $\mathbf{z}$), one has $\mathbb{E}_{J_t}[\omega_{t+1}] = \omega_t - \eta_t \frac{1}{m} \sum_{i=1}^m (\langle \omega_t, x_i \rangle_H - y_i) x_i$, and thus, $\mathbb{E}_J[\omega_{t+1}] = \mathbb{E}_J[\omega_t] - \eta_t \frac{1}{m} \sum_{i=1}^m (\langle \mathbb{E}_J[\omega_t], x_i \rangle_H - y_i) x_i$, $t = 1, \ldots, T$, which satisfies the iterative relationship given in (14). By an induction argument, (15) can then be proved.
Let $S_\rho : H \to L^2(H, \rho_X)$ be the linear map defined by $(S_\rho \omega)(x) = \langle \omega, x \rangle_H$, $\forall \omega, x \in H$. We have the following error decomposition.
Proposition 3.11. We have
$$\mathbb{E}_J[\mathcal{E}(\omega_t)] - \inf_{f \in H} \mathcal{E}(f) \leq 2\|S_\rho \mu_t - f_H\|_\rho^2 + 2\|S_\rho \nu_t - S_\rho \mu_t\|_\rho^2 + \mathbb{E}_J[\|S_\rho \omega_t - S_\rho \nu_t\|_\rho^2]. \quad (16)$$
Proof. For any $\omega \in H$, we have [25, 19] $\mathcal{E}(\omega) - \inf_{f \in H} \mathcal{E}(f) = \|S_\rho \omega - f_H\|_\rho^2$. Thus, $\mathcal{E}(\omega_t) - \inf_{f \in H} \mathcal{E}(f) = \|S_\rho \omega_t - f_H\|_\rho^2$, and
$$\mathbb{E}_J[\|S_\rho \omega_t - f_H\|_\rho^2] = \mathbb{E}_J[\|S_\rho \omega_t - S_\rho \nu_t + S_\rho \nu_t - f_H\|_\rho^2] = \mathbb{E}_J[\|S_\rho \omega_t - S_\rho \nu_t\|_\rho^2 + \|S_\rho \nu_t - f_H\|_\rho^2] + 2\mathbb{E}_J \langle S_\rho \omega_t - S_\rho \nu_t, S_\rho \nu_t - f_H \rangle_\rho.$$
Using (15) in the above, we get $\mathbb{E}_J[\|S_\rho \omega_t - f_H\|_\rho^2] = \mathbb{E}_J[\|S_\rho \omega_t - S_\rho \nu_t\|_\rho^2 + \|S_\rho \nu_t - f_H\|_\rho^2]$. Now the proof can be finished by considering
$$\|S_\rho \nu_t - f_H\|_\rho^2 = \|S_\rho \nu_t - S_\rho \mu_t + S_\rho \mu_t - f_H\|_\rho^2 \leq 2\|S_\rho \nu_t - S_\rho \mu_t\|_\rho^2 + 2\|S_\rho \mu_t - f_H\|_\rho^2.$$
There are three terms in the upper bound of the error decomposition (16). We refer to the deterministic term $\|S_\rho \mu_t - f_H\|_\rho^2$ as the bias, the term $\|S_\rho \nu_t - S_\rho \mu_t\|_\rho^2$ depending on $\mathbf{z}$ as the sample variance, and $\mathbb{E}_J[\|S_\rho \omega_t - S_\rho \nu_t\|_\rho^2]$ as the computational variance. The bias term is deterministic and is well studied in the literature, see e.g., [28] and also [19]. The main novelties are the estimates of the sample and computational variances. The proof of these results is quite lengthy and makes use of some ideas from [28, 23, 1, 29, 26, 20]. These three error terms will be estimated in the appendix, see Lemma B.2, Theorem C.6 and Theorem D.9. The bound in Theorem 3.3 thus follows by plugging these estimations into the error decomposition.
4 Numerical Simulations
In order to illustrate our theoretical results and the error decomposition, we first performed some simulations on a simple problem. We constructed $m = 100$ i.i.d. training examples of the form $y = f_\rho(x_i) + \epsilon_i$. Here, the regression function is $f_\rho(x) = |x - 1/2| - 1/2$, the input point $x_i$ is uniformly distributed in $[0, 1]$, and $\epsilon_i$ is a Gaussian noise with zero mean and standard deviation 1, for each $i \in [m]$. We perform three experiments with the same $H$, an RKHS associated with a Gaussian kernel $K(x, x') = \exp(-(x - x')^2/(2\sigma^2))$ where $\sigma = 0.2$. In the first experiment, we run mini-batch SGM, where the mini-batch size $b = \sqrt{m}$, and the step-size $\eta_t = 1/(8\sqrt{m})$. In the second experiment, we run simple SGM where the step-size is fixed as $\eta_t = 1/(8m)$, while in the third experiment, we run batch GM using the fixed step-size $\eta_t = 1/8$. For mini-batch SGM and SGM, the total error $\|S_{\hat{\rho}} \omega_t - f_\rho\|^2_{L^2_{\hat{\rho}}}$, the bias $\|S_{\hat{\rho}} \mu_t - f_\rho\|^2_{L^2_{\hat{\rho}}}$, the sample variance $\|S_{\hat{\rho}} \nu_t - S_{\hat{\rho}} \mu_t\|^2_{L^2_{\hat{\rho}}}$ and the computational variance $\|S_{\hat{\rho}} \omega_t - S_{\hat{\rho}} \nu_t\|^2_{L^2_{\hat{\rho}}}$, averaged over 50 trials, are depicted in Figures 1a and 1b, respectively. For batch GM, the total error $\|S_{\hat{\rho}} \nu_t - f_\rho\|^2_{L^2_{\hat{\rho}}}$, the bias $\|S_{\hat{\rho}} \mu_t - f_\rho\|^2_{L^2_{\hat{\rho}}}$ and the sample variance $\|S_{\hat{\rho}} \nu_t - S_{\hat{\rho}} \mu_t\|^2_{L^2_{\hat{\rho}}}$, averaged over 50 trials, are depicted in Figure 1c. Here, we replace the unknown marginal distribution $\rho_X$ by an empirical measure $\hat{\rho} = \frac{1}{2000} \sum_{i=1}^{2000} \delta_{\hat{x}_i}$, where each $\hat{x}_i$ is uniformly distributed in $[0, 1]$.
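The sketch below is a heavily simplified, finite-dimensional surrogate for this kernel experiment (our own names, a linear model in place of the RKHS, and a single realization of $J$ instead of 50 averaged trials). It shows how the three terms of (16) can be estimated by running the population iteration (13), the sample iteration (14) and SGM (4) side by side.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, T, step = 20, 100, 2000, 1.0 / 100

# Ground truth and data: a linear stand-in for the kernel experiment.
w_star = rng.standard_normal(d) / np.sqrt(d)
X = rng.standard_normal((m, d))
y = X @ w_star + rng.standard_normal(m)

# Empirical stand-in for rho_X, mirroring the 2000-point measure rho_hat.
Xpop = rng.standard_normal((2000, d))
Sigma = Xpop.T @ Xpop / 2000              # "population" covariance
h = Sigma @ w_star                        # E[x f_rho(x)] (noise-free target)

mu = np.zeros(d); nu = np.zeros(d); w = np.zeros(d)
for t in range(T):
    mu -= step * (Sigma @ mu - h)         # population iteration (13)
    nu -= step * X.T @ (X @ nu - y) / m   # sample iteration (14), batch GM
    j = rng.integers(0, m)                # SGM with b = 1, update (4)
    w -= step * X[j] * (X[j] @ w - y[j])

def sq_norm_rho(v):                       # ||S_rho v||^2 under rho_hat
    return np.mean((Xpop @ v) ** 2)

bias = sq_norm_rho(mu - w_star)
sample_var = sq_norm_rho(nu - mu)
comp_var = sq_norm_rho(w - nu)            # one draw of J; average in practice
print(bias, sample_var, comp_var)
```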
[Figure 1: Error decompositions for gradient-based learning algorithms on synthetic data (m = 100). Panels: (a) Minibatch SGM, (b) SGM, (c) Batch GM; each panel plots the bias, sample error, computational error (for (a) and (b)) and total error against the number of passes.]
[Figure 2: Misclassification errors for gradient-based learning algorithms on the BreastCancer dataset. Panels: (a) Minibatch SGM, (b) SGM, (c) Batch GM; each panel plots the training error and validation error against the number of passes.]
From Figure 1a or 1b, we see that as the number of passes increases⁴, the bias decreases, while the sample error increases. Furthermore, we see that in comparison with the bias and the sample error, the computational error is negligible. In all these experiments, the minimal total error is achieved when the bias and the sample error are balanced. These empirical results show the effects of the three terms from the error decomposition, and complement the derived bound (8), as well as the regularization effect of the number of passes over the data. Finally, we tested the simple SGM, mini-batch SGM, and batch GM, using similar step-sizes as those in the first simulation, on the BreastCancer data set⁵. The classification errors on the training set and the testing set of these three algorithms are depicted in Figure 2. We see that all of these algorithms perform similarly, which complements the bounds in Corollaries 3.4, 3.6 and 3.10.
⁴ Note that the terminology "running the algorithm with p passes" means "running the algorithm with $\lceil mp/b \rceil$ iterations", where $b$ is the mini-batch size.
⁵ https://archive.ics.uci.edu/ml/datasets/
Acknowledgments
This material is based upon work supported by the Center for Brains, Minds and Machines (CBMM),
funded by NSF STC award CCF-1231216. L. R. acknowledges the financial support of the Italian
Ministry of Education, University and Research FIRB project RBFR12M3AC.
References
[1] F. Bauer, S. Pereverzev, and L. Rosasco. On regularization algorithms in learning theory. Journal of Complexity, 23(1):52-72, 2007.
[2] O. Bousquet and L. Bottou. The tradeoffs of large scale learning. In Advances in Neural Information Processing Systems, pages 161-168, 2008.
[3] S. Boyd and A. Mutapcic. Stochastic subgradient methods. Notes for EE364b, Stanford University, Winter 2007.
[4] A. Caponnetto and E. De Vito. Optimal rates for the regularized least-squares algorithm. Foundations of Computational Mathematics, 7(3):331-368, 2007.
[5] N. Cesa-Bianchi, A. Conconi, and C. Gentile. On the generalization ability of on-line learning algorithms. IEEE Transactions on Information Theory, 50(9):2050-2057, 2004.
[6] A. Cotter, O. Shamir, N. Srebro, and K. Sridharan. Better mini-batch algorithms via accelerated gradient methods. In Advances in Neural Information Processing Systems, pages 1647-1655, 2011.
[7] F. Cucker and D.-X. Zhou. Learning Theory: an Approximation Theory Viewpoint, volume 24. Cambridge University Press, 2007.
[8] O. Dekel, R. Gilad-Bachrach, O. Shamir, and L. Xiao. Optimal distributed online prediction using mini-batches. Journal of Machine Learning Research, 13(1):165-202, 2012.
[9] A. Dieuleveut and F. Bach. Non-parametric stochastic approximation with large step sizes. Annals of Statistics, 44(4):1363-1399, 2016.
[10] M. Hardt, B. Recht, and Y. Singer. Train faster, generalize better: Stability of stochastic gradient descent. In International Conference on Machine Learning, 2016.
[11] J. Lin, R. Camoriano, and L. Rosasco. Generalization properties and implicit regularization of multiple passes SGM. In International Conference on Machine Learning, 2016.
[12] J. Lin, L. Rosasco, and D.-X. Zhou. Iterative regularization for learning with convex loss functions. Journal of Machine Learning Research, 17(77):1-38, 2016.
[13] S. Minsker. On some extensions of Bernstein's inequality for self-adjoint operators. arXiv preprint arXiv:1112.5448, 2011.
[14] A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM Journal on Optimization, 19(4):1574-1609, 2009.
[15] A. Ng. Machine learning. Coursera, Stanford University, 2016.
[16] F. Orabona. Simultaneous model selection and optimization through parameter-free stochastic learning. In Advances in Neural Information Processing Systems, pages 1116-1124, 2014.
[17] I. Pinelis and A. Sakhanenko. Remarks on inequalities for large deviation probabilities. Theory of Probability & Its Applications, 30(1):143-148, 1986.
[18] B. T. Poljak. Introduction to Optimization. Optimization Software, 1987.
[19] L. Rosasco and S. Villa. Learning with incremental iterative regularization. In Advances in Neural Information Processing Systems, pages 1621-1629, 2015.
[20] A. Rudi, R. Camoriano, and L. Rosasco. Less is more: Nyström computational regularization. In Advances in Neural Information Processing Systems, pages 1648-1656, 2015.
[21] S. Shalev-Shwartz, Y. Singer, N. Srebro, and A. Cotter. Pegasos: Primal estimated sub-gradient solver for SVM. Mathematical Programming, 127(1):3-30, 2011.
[22] O. Shamir and T. Zhang. Stochastic gradient descent for non-smooth optimization: Convergence results and optimal averaging schemes. In International Conference on Machine Learning, pages 71-79, 2013.
[23] S. Smale and D.-X. Zhou. Learning theory estimates via integral operators and their approximations. Constructive Approximation, 26(2):153-172, 2007.
[24] S. Sra, S. Nowozin, and S. J. Wright. Optimization for Machine Learning. MIT Press, 2012.
[25] I. Steinwart and A. Christmann. Support Vector Machines. Springer Science Business Media, 2008.
[26] P. Tarres and Y. Yao. Online learning as stochastic approximation of regularization paths: Optimality and almost-sure convergence. IEEE Transactions on Information Theory, 60(9):5716-5735, 2014.
[27] J. A. Tropp. User-friendly tools for random matrices: An introduction. Technical report, DTIC Document, 2012.
[28] Y. Yao, L. Rosasco, and A. Caponnetto. On early stopping in gradient descent learning. Constructive Approximation, 26(2):289-315, 2007.
[29] Y. Ying and M. Pontil. Online gradient descent learning algorithms. Foundations of Computational Mathematics, 8(5):561-596, 2008.
[30] T. Zhang. Learning bounds for kernel regression using effective data dimensionality. Neural Computation, 17(9):2077-2098, 2005.
5,763 | 6,214 | Path-Normalized Optimization of Recurrent Neural
Networks with ReLU Activations
Behnam Neyshabur*
Toyota Technological Institute at Chicago
Yuhuai Wu*
University of Toronto
[email protected]
[email protected]
Ruslan Salakhutdinov
Carnegie Mellon University
Nathan Srebro
Toyota Technological Institute at Chicago
[email protected]
[email protected]
Abstract
We investigate the parameter-space geometry of recurrent neural networks (RNNs),
and develop an adaptation of path-SGD optimization method, attuned to this
geometry, that can learn plain RNNs with ReLU activations. On several datasets
that require capturing long-term dependency structure, we show that path-SGD can
significantly improve trainability of ReLU RNNs compared to RNNs trained with
SGD, even with various recently suggested initialization schemes.
1 Introduction
Recurrent Neural Networks (RNNs) have been found to be successful in a variety of sequence learning
problems [4, 3, 9], including those involving long term dependencies (e.g., [1, 23]). However, most
of the empirical success has not been with "plain" RNNs but rather with alternate, more complex
structures, such as Long Short-Term Memory (LSTM) networks [7] or Gated Recurrent Units (GRUs)
[3]. Much of the motivation for these more complex models is not so much because of their modeling
richness, but perhaps more because they seem to be easier to optimize. As we discuss in Section
3, training plain RNNs using gradient-descent variants seems problematic, and the choice of the
activation function could cause a problem of vanishing gradients or of exploding gradients.
In this paper our goal is to better understand the geometry of plain RNNs, and develop better
optimization methods, adapted to this geometry, that directly learn plain RNNs with ReLU activations.
One motivation for insisting on plain RNNs, as opposed to LSTMs or GRUs, is because they
are simpler and might be more appropriate for applications that require low-complexity design
such as in mobile computing platforms [22, 5]. In other applications, it might be better to solve
optimization issues by better optimization methods rather than reverting to more complex models.
Better understanding optimization of plain RNNs can also assist us in designing, optimizing and
intelligently using more complex RNN extensions.
Improving training RNNs with ReLU activations has been the subject of some recent attention,
with most research focusing on different initialization strategies [12, 22]. While initialization can
certainly have a strong effect on the success of the method, it generally can at most delay the problem
of gradient explosion during optimization. In this paper we take a different approach that can be
combined with any initialization choice, and focus on the dynamics of the optimization itself.
Any local search method is inherently tied to some notion of geometry over the search space (e.g.
the space of RNNs). For example, gradient descent (including stochastic gradient descent) is tied to
the Euclidean geometry and can be viewed as steepest descent with respect to the Euclidean norm.
Changing the norm (even to a different quadratic norm, e.g. by representing the weights with respect
to a different basis in parameter space) results in different optimization dynamics. We build on prior
work on the geometry and optimization in feed-forward networks, which uses the path-norm [16]
* Contributed equally.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
                   FF (shared weights)                                           RNN notation
Input nodes        $h_v = x[v]$                                                  $h^0_t = x_t$, $h^i_0 = 0$
Internal nodes     $h_v = \big[\sum_{(u \to v) \in E} w_{u \to v} h_u\big]_+$    $h^i_t = \big[W^i_{in} h^{i-1}_t + W^i_{rec} h^i_{t-1}\big]_+$
Output nodes       $h_v = \sum_{(u \to v) \in E} w_{u \to v} h_u$                $h^d_t = W_{out} h^{d-1}_t$
Table 1: Forward computations for feedforward nets with shared weights.
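A direct transcription of Table 1 into code may help fix the notation; the sketch below is our own illustration (the layer indexing is simplified so that `W_in[i]` and `W_rec[i]` parameterize the $i$-th hidden layer, and `W_out` maps the last hidden layer to the output).

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def rnn_forward(x, W_in, W_rec, W_out):
    """Forward computations of Table 1 for a deep ReLU RNN.

    x: list of T input vectors; W_in[i], W_rec[i]: input and recurrent
    matrices of hidden layer i; W_out: output matrix.
    Returns the outputs h_t^d for t = 1, ..., T.
    """
    d = len(W_in)
    h_prev = [np.zeros(W_rec[i].shape[0]) for i in range(d)]   # h_0^i = 0
    outputs = []
    for x_t in x:                      # h_t^0 = x_t
        h = x_t
        for i in range(d):             # h_t^i = [W_in^i h_t^{i-1} + W_rec^i h_{t-1}^i]_+
            h = relu(W_in[i] @ h + W_rec[i] @ h_prev[i])
            h_prev[i] = h
        outputs.append(W_out @ h)      # h_t^d = W_out h_t^{d-1}
    return outputs
```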
(defined in Section 4) to determine a geometry leading to the path-SGD optimization method. To
do so, we investigate the geometry of RNNs as feedforward networks with shared weights (Section
2) and extend a line of work on Path-Normalized optimization to include networks with shared
weights. We show that the resulting algorithm (Section 4) has similar invariance properties on RNNs
as those of standard path-SGD on feedforward networks, and can result in better optimization with
less sensitivity to the scale of the weights.
2 Recurrent Neural Nets as Feedforward Nets with Shared Weights
We view Recurrent Neural Networks (RNNs) as feedforward networks with shared weights.
We denote a general feedforward network with ReLU activations and shared weights by $N(G, \pi, p)$, where $G(V, E)$ is a directed acyclic graph over the set of nodes $V$ that corresponds to units $v \in V$ in the network, including special subsets of input and output nodes $V_{in}, V_{out} \subset V$, $p \in \mathbb{R}^m$ is a parameter vector and $\pi : E \to \{1, \ldots, m\}$ is a mapping from edges in $G$ to parameter indices. For any edge $e \in E$, the weight of the edge $e$ is indicated by $w_e = p_{\pi(e)}$. We refer to the set of edges that share the $i$th parameter $p_i$ by $E_i = \{e \in E \mid \pi(e) = i\}$. That is, for any $e_1, e_2 \in E_i$, $\pi(e_1) = \pi(e_2)$ and hence $w_{e_1} = w_{e_2} = p_{\pi(e_1)}$.
Such a feedforward network represents a function $f_{N(G,\pi,p)} : \mathbb{R}^{|V_{in}|} \to \mathbb{R}^{|V_{out}|}$ as follows: For any input node $v \in V_{in}$, its output $h_v$ is the corresponding coordinate of the input vector $x \in \mathbb{R}^{|V_{in}|}$. For each internal node $v$, the output is defined recursively as $h_v = \big[\sum_{(u \to v) \in E} w_{u \to v} \cdot h_u\big]_+$, where $[z]_+ = \max(z, 0)$ is the ReLU activation function². For output nodes $v \in V_{out}$, no non-linearity is applied and their output $h_v = \sum_{(u \to v) \in E} w_{u \to v} \cdot h_u$ determines the corresponding coordinate of the computed function $f_{N(G,\pi,p)}(x)$. Since we will fix the graph $G$ and the mapping $\pi$ and learn the parameters $p$, we use the shorthand $f_p = f_{N(G,\pi,p)}$ to refer to the function implemented by parameters $p$. The goal of training is to find parameters $p$ that minimize some error functional $L(f_p)$ that depends on $p$ only through the function $f_p$. E.g., in supervised learning $L(f) = \mathbb{E}[\mathrm{loss}(f(x), y)]$ and this is typically done by minimizing an empirical estimate of this expectation.
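The definition above can be made concrete with a small sketch (ours, not the authors' implementation): the mapping $\pi$ is a plain index array, so two edges with the same $\pi$ value automatically share one entry of $p$. Nodes are assumed to be given in topological order. Tying all time-steps' copies of a recurrent edge to one parameter index recovers the RNNs of Table 1.

```python
def shared_weight_forward(nodes, edges, pi, p, x, in_nodes, out_nodes):
    """Forward pass of N(G, pi, p): a DAG of ReLU units with shared weights.

    nodes: node ids in topological order; edges: list of (u, v) pairs;
    pi: list mapping an edge index to a parameter index, so that
    w_e = p[pi[e]] realizes the weight sharing.
    """
    h = {v: 0.0 for v in nodes}
    for v, xv in zip(in_nodes, x):            # input nodes output x[v]
        h[v] = xv
    incoming = {}
    for e, (u, v) in enumerate(edges):
        incoming.setdefault(v, []).append((u, p[pi[e]]))
    for v in nodes:
        if v in in_nodes or v not in incoming:
            continue
        s = sum(w * h[u] for u, w in incoming[v])
        h[v] = s if v in out_nodes else max(s, 0.0)   # ReLU on internal nodes only
    return [h[v] for v in out_nodes]
```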
If the mapping $\pi$ is a one-to-one mapping, then there is no weight sharing and the model corresponds to a standard feedforward network. On the other hand, weight sharing exists if $\pi$ is a many-to-one mapping. Two well-known examples of feedforward networks with shared weights are convolutional and recurrent networks. We mostly use the general notation of feedforward networks with shared weights throughout the paper, as it is more general and simplifies the development and notation. However, when focusing on RNNs, it is helpful to discuss them using a more familiar notation, which we briefly introduce next.
Recurrent Neural Networks. Time-unfolded RNNs are feedforward networks with shared weights that map an input sequence to an output sequence. Each input node corresponds to either a coordinate of the input vector at a particular time step or a hidden unit at time 0. Each output node also corresponds to a coordinate of the output at a specific time step. Finally, each internal node refers to some hidden unit at time $t \geq 1$. When discussing RNNs, it is useful to refer to different layers and the values calculated at different time steps. We use a notation for RNN structures in which the nodes are partitioned into layers and $h^i_t$ denotes the output of nodes in layer $i$ at time step $t$. Let $x = (x_1, \dots, x_T)$ be the input at different time steps, where $T$ is the maximum number of propagations through time; we refer to it as the length of the RNN. For $0 \leq i < d$, let $W^i_{in}$ and $W^i_{rec}$ be the input and recurrent parameter matrices of layer $i$, and let $W_{out}$ be the output parameter matrix. Table 1 shows the forward computations for RNNs. The output of the function implemented by the RNN can then be calculated as $f_{W,t}(x) = h^d_t$. Note that in this notation, the weight matrices $W_{in}$, $W_{rec}$ and $W_{out}$ correspond to "free" parameters of the model that are shared across different time steps.
² The bias terms can be modeled by having an additional special node $v_{bias}$ that is connected to all internal and output nodes, where $h_{v_{bias}} = 1$.
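As a concrete illustration of the RNN column of Table 1, the following is a minimal numpy sketch (our own; a stack of ReLU layers with a linear output at every time step, all names hypothetical):

```python
import numpy as np

def rnn_forward(x, W_in, W_rec, W_out):
    """x: (T, n) input sequence; W_in, W_rec: per-layer matrices; W_out: output matrix."""
    d = len(W_in)                                # number of hidden layers
    h = [np.zeros(W.shape[0]) for W in W_in]     # h^i_0 = 0
    outputs = []
    for t in range(x.shape[0]):
        below = x[t]                             # h^0_t = x_t
        for i in range(d):                       # h^i_t = [W^i_in h^{i-1}_t + W^i_rec h^i_{t-1}]_+
            h[i] = np.maximum(W_in[i] @ below + W_rec[i] @ h[i], 0.0)
            below = h[i]
        outputs.append(W_out @ below)            # h^d_t = W_out h^{d-1}_t, no nonlinearity
    return np.stack(outputs)

rng = np.random.default_rng(0)
out = rnn_forward(rng.normal(size=(6, 3)),
                  [rng.normal(size=(4, 3))], [rng.normal(size=(4, 4))],
                  rng.normal(size=(2, 4)))
print(out.shape)   # (6, 2)
```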
3
Non-Saturating Activation Functions
The choice of activation function for neural networks can have a large impact on optimization. We are particularly concerned with the distinction between "saturating" and "non-saturating" activation functions. We consider only monotone activation functions and say that a function is "saturating" if it is bounded; this includes, e.g., the sigmoid, the hyperbolic tangent, and the piecewise-linear ramp activation functions. Boundedness necessarily implies that the function values converge to finite values at negative and positive infinity, and hence asymptote to horizontal lines on both sides. That is, the derivative of the activation converges to zero as the input goes to both −∞ and +∞. Networks with saturating activations therefore have a major shortcoming: the vanishing gradient problem [6]. The problem here is that the gradient disappears when the magnitude of the input to an activation is large (whether the unit is very "active" or very "inactive"), which makes optimization very challenging.
While sigmoid and hyperbolic tangent have historically been popular choices for fully connected feedforward and convolutional neural networks, more recent work has shown undeniable advantages of non-saturating activations such as the ReLU, which is now the standard choice for fully connected and convolutional networks [15, 10]. Non-saturating activations, including the ReLU, are typically still bounded from below and asymptote to a horizontal line, with a vanishing derivative, at −∞. But they are unbounded from above, enabling their derivative to remain bounded away from zero as the input goes to +∞. Using ReLUs enables gradients not to vanish along activated paths and thus can provide a stronger signal for training.
However, for recurrent neural networks, using ReLU activations is challenging in a different way, as
even a small change in the direction of the leading eigenvector of the recurrent weights could get
amplified and potentially lead to the explosion in forward or backward propagation [1].
To understand this, consider a long path from an input in the first element of the sequence to an output
of the last element, which passes through the same RNN edge at each step (i.e. through many edges
in some Ei in the shared-parameter representation). The length of this path, and the number of times
it passes through edges associated with a single parameter, is proportional to the sequence length,
which could easily be a few hundred or more. The effect of this parameter on the path is therefore
exponentiated by the sequence length, as are gradient updates for this parameter, which could lead to
parameter explosion unless an extremely small step size is used.
Understanding the geometry of RNNs with ReLUs could help us deal with the above issues more effectively. We next investigate some properties of the geometry of RNNs with ReLU activations.
Invariances in Feedforward Nets with Shared Weights
Feedforward networks (with or without shared weights) are highly over-parameterized, i.e. there are many parameter settings p that represent the same function $f_p$. Since our true object of interest is the function $f$, and not the identity $p$ of the parameters, it would be beneficial if optimization depended only on $f_p$ and did not get "distracted" by differences in $p$ that do not affect $f_p$. It is therefore helpful to study the transformations on the parameters that do not change the function represented by the network, and to come up with methods whose performance is not affected by such transformations.
Definition 1. We say a network N is invariant to a transformation T if for any parameter setting p,
fp = fT (p) . Similarly, we say an update rule A is invariant to T if for any p, fA(p) = fA(T (p)) .
Invariances have also been studied as different mappings from the parameter space to the same
function space [19] while we define the transformation as a mapping inside a fixed parameter space.
A very important invariance in feedforward networks is node-wise rescaling [17]. For any internal node $v$ and any scalar $\alpha > 0$, we can multiply all incoming weights into $v$ (i.e. $w_{u \to v}$ for any $(u \to v) \in E$) by $\alpha$ and all the outgoing weights (i.e. $w_{v \to u}$ for any $(v \to u) \in E$) by $1/\alpha$ without changing the function computed by the network. Not all node-wise rescaling transformations can be applied in feedforward nets with shared weights. This is due to the fact that some weights are forced to be equal and therefore we are only allowed to change them by the same scaling factor.
Definition 2. Given a network $N$, we say an invariant transformation $\tilde{T}$ that is defined over edge weights (rather than parameters) is feasible for parameter mapping $\pi$ if the shared weights remain equal after the transformation, i.e. for any $i$ and for any $e, e' \in E_i$, $\tilde{T}(w)_e = \tilde{T}(w)_{e'}$.
[Figure 1: An example of invariances in an RNN with two hidden layers, each of which has 2 hidden units. The dashed lines correspond to recurrent weights. The network on the left hand side is equivalent (i.e. represents the same function) to the network on the right for any nonzero $\alpha^1_1 = a$, $\alpha^1_2 = b$, $\alpha^2_1 = c$, $\alpha^2_2 = d$.]
Therefore, it is helpful to understand which node-wise rescalings are feasible for RNNs. In the following theorem, we characterize all feasible node-wise invariances in RNNs.
Theorem 1. For any $\alpha$ such that $\alpha^i_j > 0$, any Recurrent Neural Network with ReLU activation is invariant to the transformation $T_\alpha([W_{in}, W_{rec}, W_{out}]) = [T_{in,\alpha}(W_{in}),\, T_{rec,\alpha}(W_{rec}),\, T_{out,\alpha}(W_{out})]$ where for any $i, j, k$:

$$T_{in,\alpha}(W_{in})^i[j,k] = \begin{cases} \alpha^i_j\, W^i_{in}[j,k] & i = 1, \\ \big(\alpha^i_j / \alpha^{i-1}_k\big)\, W^i_{in}[j,k] & 1 < i < d, \end{cases} \qquad (1)$$

$$T_{rec,\alpha}(W_{rec})^i[j,k] = \big(\alpha^i_j / \alpha^i_k\big)\, W^i_{rec}[j,k], \qquad T_{out,\alpha}(W_{out})[j,k] = \big(1/\alpha^{d-1}_k\big)\, W_{out}[j,k].$$

Furthermore, any feasible node-wise rescaling transformation can be presented in the above form.
The proofs of all theorems and lemmas are given in the supplementary material. The above theorem
shows that there are many transformations under which RNNs represent the same function. An
example of such invariances is shown in Fig. 1. Therefore, we would like to have optimization
algorithms that are invariant to these transformations and in order to do so, we need to look at
measures that are invariant to such mappings.
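The invariance of Theorem 1 is straightforward to verify numerically. The sketch below (our own construction, not from the paper) applies a random node-wise rescaling $\alpha$ to a single recurrent ReLU layer and checks that the computed function is unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)
H, n = 4, 3
W_in  = rng.normal(size=(H, n))
W_rec = rng.normal(size=(H, H))
W_out = rng.normal(size=(2, H))

def run(W_in, W_rec, W_out, xs):
    h = np.zeros(H)
    for x in xs:                                   # h_t = [W_in x_t + W_rec h_{t-1}]_+
        h = np.maximum(W_in @ x + W_rec @ h, 0.0)
    return W_out @ h

alpha = rng.uniform(0.5, 2.0, size=H)              # one positive scale per hidden unit
T_in  = alpha[:, None] * W_in                      # alpha_j * W_in[j, k]
T_rec = (alpha[:, None] / alpha[None, :]) * W_rec  # (alpha_j / alpha_k) * W_rec[j, k]
T_out = W_out / alpha[None, :]                     # (1 / alpha_k) * W_out[j, k]

xs = rng.normal(size=(5, n))
print(np.allclose(run(W_in, W_rec, W_out, xs), run(T_in, T_rec, T_out, xs)))  # True
```

The check relies on the positive homogeneity of the ReLU: rescaling each hidden unit by $\alpha_j > 0$ commutes with $[\cdot]_+$, so the rescaled hidden state is exactly $\alpha \odot h_t$ and the output layer cancels the scales.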
4
Path-SGD for Networks with Shared Weights
As we discussed, optimization is inherently tied to a choice of geometry, here represented by a choice of complexity measure or "norm"³. Furthermore, we prefer using an invariant measure, which could then lead to an invariant optimization method. In Section 4.1 we introduce the path-regularizer and in Section 4.2 the derived Path-SGD optimization algorithm for standard feed-forward networks. Then in Section 4.3 we extend these notions to networks with shared weights, including RNNs, and present two invariant optimization algorithms based on them. In Section 4.4 we show how these can be implemented efficiently using forward and backward propagations.
4.1
Path-regularizer
The path-regularizer is the sum over all paths from input nodes to output nodes of the product of squared weights along the path. To define it formally, let $P$ be the set of directed paths from input to output units, so that for any path $\zeta = (\zeta_0, \dots, \zeta_{len(\zeta)}) \in P$ of length $len(\zeta)$, we have $\zeta_0 \in V_{in}$, $\zeta_{len(\zeta)} \in V_{out}$ and for any $0 \leq i \leq len(\zeta) - 1$, $(\zeta_i \to \zeta_{i+1}) \in E$. We also abuse the notation and write $e \in \zeta$ if for some $i$, $e = (\zeta_i, \zeta_{i+1})$. Then the path-regularizer can be written as:

$$\gamma^2_{net}(w) = \sum_{\zeta \in P} \; \prod_{i=0}^{len(\zeta)-1} w^2_{\zeta_i \to \zeta_{i+1}} \qquad (2)$$
Equivalently, the path-regularizer can be defined recursively on the nodes of the network as:

$$\gamma^2_v(w) = \sum_{(u \to v) \in E} \gamma^2_u(w)\, w^2_{u \to v}, \qquad \gamma^2_{net}(w) = \sum_{u \in V_{out}} \gamma^2_u(w) \qquad (3)$$
³ The path-norm which we define is a norm on functions, not on weights, but as we prefer not getting into this technical discussion here, we use the term "norm" very loosely to indicate some measure of magnitude [18].
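For a plain layered network, the recursion in Eq. (3) amounts to one forward pass with squared weights. A minimal sketch (our own; names hypothetical):

```python
import numpy as np

def path_regularizer(layers):
    """gamma^2_net for a layered net via the recursion in Eq. (3).
    layers: list of weight matrices, layer l maps activations of layer l to layer l+1."""
    g = np.ones(layers[0].shape[1])   # gamma^2_v = 1 for input nodes
    for W in layers:
        g = (W ** 2) @ g              # gamma^2_v = sum_u gamma^2_u * w_{u->v}^2
    return g.sum()                    # sum over output nodes

layers = [np.array([[1., 2.], [0., 1.]]), np.array([[3., 1.]])]
# Four input-output paths with squared-weight products 9 + 36 + 0 + 1:
print(path_regularizer(layers))   # 46.0
```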
4.2
Path-SGD for Feedforward Networks
Path-SGD is an approximate steepest descent step with respect to the path-norm. More formally, for a network without shared weights, where the parameters are the weights themselves, consider the diagonal quadratic approximation of the path-regularizer about the current iterate $w^{(t)}$:

$$\hat{\gamma}^2_{net}(w^{(t)} + \Delta w) = \gamma^2_{net}(w^{(t)}) + \big\langle \nabla \gamma^2_{net}(w^{(t)}),\, \Delta w \big\rangle + \tfrac{1}{2}\, \Delta w^{\top} \mathrm{diag}\big(\nabla^2 \gamma^2_{net}(w^{(t)})\big)\, \Delta w \qquad (4)$$

Using the corresponding quadratic norm $\|w - w'\|^2_{\hat{\gamma}^2_{net}(w^{(t)}+\Delta w)} = \tfrac{1}{2} \sum_{e \in E} \tfrac{\partial^2 \gamma^2_{net}}{\partial w_e^2}\,(w_e - w'_e)^2$, we can define an approximate steepest descent step as:

$$w^{(t+1)} = \min_{w}\; \eta\, \big\langle \nabla L(w^{(t)}),\, w - w^{(t)} \big\rangle + \big\| w - w^{(t)} \big\|^2_{\hat{\gamma}^2_{net}(w^{(t)}+\Delta w)}. \qquad (5)$$

Solving (5) yields the update:

$$w_e^{(t+1)} = w_e^{(t)} - \frac{\eta}{\kappa_e(w^{(t)})}\, \frac{\partial L}{\partial w_e}(w^{(t)}) \qquad \text{where: } \kappa_e(w) = \frac{1}{2}\, \frac{\partial^2 \gamma^2_{net}(w)}{\partial w_e^2}. \qquad (6)$$

The stochastic version that uses a subset of training examples to estimate $\frac{\partial L}{\partial w_{u \to v}}(w^{(t)})$ is called Path-SGD [16]. We now show how Path-SGD can be extended to networks with shared weights.
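For a layered network without shared weights, $\kappa_e$ factorizes into the squared-path mass entering the edge's tail node and the squared-path mass leaving its head node, which makes the update (6) cheap to implement. A sketch of one such step (our own construction; the small guard against division by zero is our addition):

```python
import numpy as np

def path_sgd_step(layers, grads, lr):
    """One Path-SGD update (Eq. 6) for a layered net without shared weights.
    layers: list of weight matrices W_l of shape (n_{l+1}, n_l); grads: dL/dW_l."""
    sq = [W ** 2 for W in layers]
    fwd = [np.ones(layers[0].shape[1])]   # squared path mass flowing from the inputs
    for S in sq:
        fwd.append(S @ fwd[-1])
    bwd = [np.ones(layers[-1].shape[0])]  # squared path mass flowing to the outputs
    for S in reversed(sq):
        bwd.insert(0, S.T @ bwd[0])
    updated = []
    for l, (W, G) in enumerate(zip(layers, grads)):
        # kappa_{(u->v)} = gamma^2_in(u) * gamma^2_out(v)
        kappa = np.outer(bwd[l + 1], fwd[l]) + 1e-12
        updated.append(W - lr * G / kappa)
    return updated

layers = [np.array([[1., 2.], [0., 1.]]), np.array([[3., 1.]])]
grads = [np.ones_like(W) for W in layers]
print(path_sgd_step(layers, grads, lr=0.1)[0])
```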
4.3
Extending to Networks with Shared Weights
When the network has shared weights, the path-regularizer is a function of the parameters $p$ and therefore the quadratic approximation should also be with respect to the iterate $p^{(t)}$ instead of $w^{(t)}$, which results in the following update rule:

$$p^{(t+1)} = \min_{p}\; \eta\, \big\langle \nabla L(p^{(t)}),\, p - p^{(t)} \big\rangle + \big\| p - p^{(t)} \big\|^2_{\hat{\gamma}^2_{net}(p^{(t)}+\Delta p)}, \qquad (7)$$

where $\|p - p'\|^2_{\hat{\gamma}^2_{net}(p^{(t)}+\Delta p)} = \tfrac{1}{2} \sum_{i=1}^{m} \tfrac{\partial^2 \gamma^2_{net}}{\partial p_i^2}\,(p_i - p'_i)^2$. Solving (7) gives the following update:

$$p_i^{(t+1)} = p_i^{(t)} - \frac{\eta}{\kappa_i(p^{(t)})}\, \frac{\partial L}{\partial p_i}(p^{(t)}) \qquad \text{where: } \kappa_i(p) = \frac{1}{2}\, \frac{\partial^2 \gamma^2_{net}(p)}{\partial p_i^2}. \qquad (8)$$

The second derivative terms $\kappa_i$ are specified in terms of their path structure as follows:
Lemma 1. $\kappa_i(p) = \kappa^{(1)}_i(p) + \kappa^{(2)}_i(p)$ where

$$\kappa^{(1)}_i(p) = \sum_{e \in E_i} \sum_{\zeta \in P} \mathbb{1}_{e \in \zeta} \prod_{\substack{j=0 \\ e \neq (\zeta_j \to \zeta_{j+1})}}^{len(\zeta)-1} p^2_{\pi(\zeta_j \to \zeta_{j+1})} \;=\; \sum_{e \in E_i} \kappa_e(w), \qquad (9)$$

$$\kappa^{(2)}_i(p) = p_i^2 \sum_{\substack{e_1, e_2 \in E_i \\ e_1 \neq e_2}} \sum_{\zeta \in P} \mathbb{1}_{e_1, e_2 \in \zeta} \prod_{\substack{j=0 \\ e_1 \neq (\zeta_j \to \zeta_{j+1}) \\ e_2 \neq (\zeta_j \to \zeta_{j+1})}}^{len(\zeta)-1} p^2_{\pi(\zeta_j \to \zeta_{j+1})}, \qquad (10)$$

and $\kappa_e(w)$ is defined in (6).
The second term $\kappa^{(2)}_i(p)$ measures the effect of interactions between edges corresponding to the same parameter (edges from the same $E_i$) on the same path from input to output. In particular, if on any path from an input unit to an output unit no two edges along the path share the same parameter, then $\kappa^{(2)}(p) = 0$. For example, for any feedforward or convolutional neural network, $\kappa^{(2)}(p) = 0$. But for RNNs, there certainly are multiple edges sharing a single parameter on the same path, and so we could have $\kappa^{(2)}(p) \neq 0$.
The above lemma gives us a precise update rule for the approximate steepest descent with respect to the path-regularizer. The following theorem confirms that the steepest descent with respect to this regularizer is also invariant to all feasible node-wise rescalings for networks with shared weights.

Theorem 2. For any feedforward network with shared weights, the update (8) is invariant to all feasible node-wise rescalings. Moreover, a simpler update rule that only uses $\kappa^{(1)}_i(p)$ in place of $\kappa_i(p)$ is also invariant to all feasible node-wise rescalings.
Equations (9) and (10) involve a sum over all paths in the network, which is exponential in the depth of the network. However, we next show that both quantities can be calculated efficiently.
[Figure 2: Path-SGD with/without the second term in word-level language modeling on PTB. We use the standard split (929k training, 73k validation and 82k test) and a vocabulary size of 10k words. We initialize the weights by sampling from the uniform distribution with range [−0.1, 0.1]. The table on the left shows the ratio of the magnitudes of the first and second terms for different lengths T and numbers of hidden units H (approximately: H = 400, T = 10: 0.00014; H = 400, T = 40: 0.00022; H = 100, T = 10: 0.00037; H = 100, T = 40: 0.00048). The plots compare the training and test errors (perplexity vs. epoch) for SGD, Path-SGD with $\kappa^{(1)}$ only, and Path-SGD with $\kappa^{(1)}+\kappa^{(2)}$, using a mini-batch of size 32 and backpropagating through T = 20 time steps, where the step-size is chosen by a grid search.]
4.4
Simple and Efficient Computations for RNNs
We show how to calculate $\kappa^{(1)}_i(p)$ and $\kappa^{(2)}_i(p)$ by considering a network with the same architecture but with squared weights:
Theorem 3. For any network $N(G, \pi, p)$, consider $N(G, \pi, \tilde{p})$ where for any $i$, $\tilde{p}_i = p_i^2$. Define the function $g : \mathbb{R}^{|V_{in}|} \to \mathbb{R}$ to be the sum of outputs of this network: $g(x) = \sum_{i=1}^{|V_{out}|} f_{\tilde{p}}(x)[i]$. Then $\kappa^{(1)}$ and $\kappa^{(2)}$ can be calculated as follows, where $\mathbf{1}$ is the all-ones input vector:

$$\kappa^{(1)}(p) = \nabla_{\tilde{p}}\, g(\mathbf{1}), \qquad \kappa^{(2)}_i(p) = \tilde{p}_i \sum_{\substack{(u \to v), (u' \to v') \in E_i \\ (u \to v) \neq (u' \to v')}} \frac{\partial g(\mathbf{1})}{\partial h_{v'}(\tilde{p})}\, \frac{\partial h_{u'}(\tilde{p})}{\partial h_v(\tilde{p})}\, h_u(\tilde{p}). \qquad (11)$$
In the process of calculating the gradient $\nabla_{\tilde{p}}\, g(\mathbf{1})$, we need to calculate $h_u(\tilde{p})$ and $\partial g(\mathbf{1})/\partial h_v(\tilde{p})$ for any $u, v$. Therefore, the only remaining term to calculate (besides $\nabla_{\tilde{p}}\, g(\mathbf{1})$) is $\partial h_{u'}(\tilde{p})/\partial h_v(\tilde{p})$.
Recall that $T$ is the length (maximum number of propagations through time) and $d$ is the number of layers in an RNN. Let $H$ be the number of hidden units in each layer and $B$ be the size of the mini-batch. Then calculating the gradient of the loss at all points in the mini-batch (the standard work required for any mini-batch gradient approach) requires time $O(BdTH^2)$. In order to calculate $\kappa^{(1)}_i(p)$, we need to calculate the gradient $\nabla_{\tilde{p}}\, g(\mathbf{1})$ of a similar network at a single input, so the time complexity is just an additional $O(dTH^2)$. The second term $\kappa^{(2)}(p)$ can also be calculated for RNNs in $O(dTH^2(T + H))$. For an RNN, $\kappa^{(2)}(W_{in}) = 0$ and $\kappa^{(2)}(W_{out}) = 0$, because only recurrent weights are shared multiple times along an input-output path. $\kappa^{(2)}(W_{rec})$ can be written and calculated in the matrix form:
$$\kappa^{(2)}(W^i_{rec}) = W'^i_{rec} \odot \left[ \sum_{t_1=0}^{T-3} \sum_{t_2=2}^{T-t_1-1} \left( (W'^i_{rec})^{t_2} \right)^{\top} \left( \frac{\partial g(\mathbf{1})}{\partial h^i_{t_1+t_2+1}(\tilde{p})} \right)^{\top} h^i_{t_1}(\tilde{p})^{\top} \right]$$

where for any $i, j, k$ we have $W'^i_{rec}[j,k] = \big(W^i_{rec}[j,k]\big)^2$. The only terms that require extra computation are powers of $W_{rec}$, which can be done in $O(dTH^3)$, and the rest of the matrix computations need
$O(dT^2 H^2)$. Therefore, the ratio of the time complexity of calculating the first term and second term relative to the gradient over a mini-batch is $O(1/B)$ and $O((T+H)/B)$ respectively. Calculating only $\kappa^{(1)}_i(p)$ is therefore very cheap, with minimal per-minibatch cost, while calculating $\kappa^{(2)}_i(p)$ might be expensive for large networks. Beyond the low computational cost, calculating $\kappa^{(1)}_i(p)$ is also very easy to implement, as it requires only taking the gradient with respect to a standard feed-forward calculation in a network with slightly modified weights; with most deep learning libraries it can be implemented very easily with only a few lines of code.
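As an illustration, for a single-layer RNN that emits an output at every time step, $\kappa^{(1)}$ is exactly the gradient of $g(\mathbf{1})$ in the squared-weight network; since the squared weights and the all-ones input are nonnegative, the ReLUs never clip, and the backward pass reduces to a short recursion. A hand-rolled numpy sketch (our own; with an autograd library the entire computation is a single call that differentiates $g(\mathbf{1})$):

```python
import numpy as np

def kappa1_rnn(W_in, W_rec, W_out, T):
    """kappa^(1) (Theorem 3) for a single-layer RNN unrolled T steps with an
    output at every step: gradient of g(1) in the squared-weight network."""
    A, R, C = W_in ** 2, W_rec ** 2, W_out ** 2       # the tilde-p network
    n, H = W_in.shape[1], W_in.shape[0]
    ones = np.ones(n)
    # Forward: nonnegative weights and inputs, so the ReLU acts as identity here.
    h = [np.zeros(H)]
    for t in range(T):
        h.append(A @ ones + R @ h[-1])
    # Backward: dg/dh_t = (direct output term) + R^T dg/dh_{t+1}
    g_direct = C.sum(axis=0)
    dh = [None] * (T + 1)
    dh[T] = g_direct.copy()
    for t in range(T - 1, 0, -1):
        dh[t] = g_direct + R.T @ dh[t + 1]
    # Gradients w.r.t. the squared parameters are exactly kappa^(1).
    kA = sum(np.outer(dh[t], ones) for t in range(1, T + 1))
    kR = sum(np.outer(dh[t], h[t - 1]) for t in range(1, T + 1))
    kC = sum(np.outer(np.ones(C.shape[0]), h[t]) for t in range(1, T + 1))
    return kA, kR, kC

rng = np.random.default_rng(0)
kA, kR, kC = kappa1_rnn(rng.normal(size=(5, 3)), rng.normal(size=(5, 5)),
                        rng.normal(size=(2, 5)), T=10)
print(kA.shape, kR.shape, kC.shape)   # (5, 3) (5, 5) (2, 5)
```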
5
Experiments
5.1
The Contribution of the Second Term
As we discussed in Section 4.4, the second term $\kappa^{(2)}$ in the update rule can be computationally expensive for large networks. In this section we investigate the significance of the second term
[Figure 3: Test errors (MSE vs. number of epochs) for the addition problem of different lengths (100, 400 and 750), comparing IRNN and RNN-Path.]
and show that, at least in our experiments, the contribution of the second term is negligible. To compare the two terms $\kappa^{(1)}$ and $\kappa^{(2)}$, we train a single-layer RNN with H = 200 hidden units for the task of word-level language modeling on the Penn Treebank (PTB) Corpus [13]. Fig. 2 compares the performance of SGD vs. Path-SGD with/without $\kappa^{(2)}$. We clearly see that both versions of Path-SGD perform very similarly and both of them outperform SGD significantly. The results in Fig. 2 suggest that the first term is more significant and that we can therefore ignore the second term.
To better understand the importance of the two terms, we compared the ratio of the norms $\|\kappa^{(2)}\|_2 / \|\kappa^{(1)}\|_2$ for different RNN lengths T and numbers of hidden units H. The table in Fig. 2 shows that the contribution of the second term is bigger when the network has fewer hidden units and the length of the RNN is larger (H is small and T is large). However, in many cases, it appears that the first term has a much bigger contribution in the update step and hence the second term can be safely ignored. Therefore, in the rest of our experiments, we calculate the Path-SGD updates only using the first term $\kappa^{(1)}$.
5.2
Synthetic Problems with Long-term Dependencies
Training Recurrent Neural Networks is known to be hard for modeling long-term dependencies due to
the gradient vanishing/exploding problem [6, 2]. In this section, we consider synthetic problems that
are specifically designed to test the ability of a model to capture the long-term dependency structure.
Specifically, we consider the addition problem and the sequential MNIST problem.
Addition problem: The addition problem was introduced in [7]. Here, each input consists of two
sequences of length T , one of which includes numbers sampled from the uniform distribution with
range [0, 1] and the other sequence serves as a mask which is filled with zeros except for two entries.
These two entries indicate which of the two numbers in the first sequence we need to add and the task
is to output the result of this addition.
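A minimal generator for this dataset might look as follows (our own sketch; shapes and names are hypothetical):

```python
import numpy as np

def addition_batch(batch_size, T, rng=None):
    """Inputs: (batch, T, 2) = [values, mask]; target: sum of the two marked values."""
    rng = rng or np.random.default_rng(0)
    values = rng.uniform(0.0, 1.0, size=(batch_size, T))
    mask = np.zeros((batch_size, T))
    for b in range(batch_size):
        i, j = rng.choice(T, size=2, replace=False)   # the two entries to add
        mask[b, i] = mask[b, j] = 1.0
    targets = (values * mask).sum(axis=1)
    return np.stack([values, mask], axis=-1), targets

x, y = addition_batch(4, 100)
print(x.shape, y.shape)   # (4, 100, 2) (4,)
```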
Sequential MNIST: In sequential MNIST, each digit image is reshaped into a sequence of length 784,
turning the digit classification task into sequence classification with long-term dependencies [12, 1].
For both tasks, we closely follow the experimental protocol in [12]. We train a single-layer RNN consisting of 100 hidden units with path-SGD, referred to as RNN-Path. We also train an RNN of the same size with identity initialization, as was proposed in [12], using SGD, as our baseline model, referred to as IRNN. We performed grid search for the learning rates over $\{10^{-2}, 10^{-3}, 10^{-4}\}$ for both our model and the baseline. Non-recurrent weights were initialized from the uniform distribution with range [−0.01, 0.01]. Similar to [1], we found the IRNN to be fairly unstable (with SGD optimization typically diverging). Therefore, for IRNN, we ran 10 different initializations and picked the one that did not explode to show its performance.
In our first experiment, we evaluate Path-SGD on the addition problem. The results are shown in Fig. 3 for increasing sequence lengths T ∈ {100, 400, 750}. We note that this problem becomes much harder as T increases, because the dependency between the output (the sum of two numbers) and the corresponding inputs becomes more distant. We also compare RNN-Path with previously published results, including the identity-initialized RNN [12] (IRNN), the unitary RNN [1] (uRNN), and the np-RNN⁴ introduced by [22]. Table 2 shows the effectiveness of using Path-SGD. Perhaps more surprisingly, with the help of path-normalization, a simple RNN with the identity initialization is able to achieve 0% error on sequences of length 750, whereas all the other methods, including LSTMs, fail. This shows that Path-SGD may help stabilize the training and alleviate the gradient problem, so as to perform well on longer sequences. We next tried to model
⁴ The original paper does not include any result for length 750, so we implemented np-RNN for comparison. However, in our implementation the np-RNN is not able to even learn sequences of length 200. Thus we put ">2" for length 750.
                  Adding 100   Adding 400   Adding 750   sMNIST
IRNN [12]         0            16.7         16.7         5.0
uRNN [1]          0            3            16.7         4.9
LSTM [1]          0            2            16.7         1.8
np-RNN [22]       0            2            >2           3.1
IRNN (ours)       0            0            16.7         7.1
RNN-Path (ours)   0            0            0            3.1

Table 2: Test error (MSE) for the adding problem with different input sequence lengths, and test classification error for sequential MNIST.
                         PTB     text8
RNN+smoothReLU [20]      1.55    -
HF-MRNN [14]             1.42    1.54
RNN-ReLU [11]            1.65    -
RNN-tanh [11]            1.55    -
TRec, β = 500 [11]       1.48    -
RNN-ReLU (ours)          1.55    1.65
RNN-tanh (ours)          1.58    1.70
RNN-Path (ours)          1.47    1.58
LSTM (ours)              1.41    1.52

Table 3: Test BPC for PTB and text8.
sequences of length 1000, but we found that for such very long sequences RNNs, even with Path-SGD, fail to learn.
Next, we evaluate Path-SGD on the Sequential MNIST problem. Table 2, right column, reports
test error rates achieved by RNN-Path compared to the previously published results. Clearly, using
Path-SGD helps RNNs achieve better generalization. In many cases, RNN-Path outperforms other
RNN methods (except for LSTMs), even for such a long-term dependency problem.
5.3
Language Modeling Tasks
In this section we evaluate Path-SGD on language modeling tasks. We consider two datasets, Penn Treebank (PTB-c) and text8⁵. PTB-c: We performed experiments on a tokenized Penn Treebank Corpus, following the experimental protocol of [11]. The training, validation and test data contain 5017k, 393k and 442k characters respectively. The alphabet size is 50, and each training sequence is of length 50. text8: The text8 dataset contains 100M characters from Wikipedia with an alphabet size of 27. We follow the data partition of [14], where each training sequence has a length of 180. Performance is evaluated using the bits-per-character (BPC) metric, which is log2 of the perplexity.
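The conversion is a one-liner: since perplexity = exp(cross-entropy in nats), BPC = log2(perplexity) = cross-entropy / ln 2. A sketch:

```python
import numpy as np

def bpc(ce_nats):
    """Bits-per-character from the average per-character cross-entropy in nats."""
    return ce_nats / np.log(2.0)   # log2(perplexity) = ln(perplexity) / ln(2)

print(bpc(1.0))   # ~1.4427 bits per character
```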
Similar to the experiments on the synthetic datasets, for both tasks we train a single-layer RNN consisting of 2048 hidden units with path-SGD (RNN-Path). Due to the large dimension of the hidden space, SGD can take a fairly long time to converge. Instead, we use the Adam optimizer [8] to help speed up the training, where we simply use the path-SGD gradient as input to the Adam optimizer. We also train three additional baseline models: a ReLU RNN with 2048 hidden units, a tanh RNN with 2048 hidden units, and an LSTM with 1024 hidden units, all trained using Adam. We performed grid search for the learning rate over $\{10^{-3}, 5 \cdot 10^{-4}, 10^{-4}\}$ for all of our models. For ReLU RNNs, we initialize the recurrent matrices from uniform[−0.01, 0.01], and from uniform[−0.2, 0.2] for non-recurrent weights. For LSTMs, we use orthogonal initialization [21] for the recurrent matrices and uniform[−0.01, 0.01] for non-recurrent weights. The results are summarized in Table 3.
We also compare our results to an RNN that uses a hidden activation regularizer [11] (TRec, β = 500), Multiplicative RNNs trained by Hessian-Free methods [14] (HF-MRNN), and an RNN with a smooth version of ReLU [20]. Table 3 shows that path-normalization is able to outperform RNN-ReLU and RNN-tanh, while at the same time shortening the performance gap between the plain RNN and other more complicated models (e.g. closing the gap to the LSTM by 57% on PTB and 54% on text8). This demonstrates the efficacy of path-normalized optimization for training RNNs with ReLU activation.
6
Conclusion
We investigated the geometry of RNNs in a broader class of feedforward networks with shared weights and showed how understanding the geometry can lead to significant improvements on different learning tasks. By designing an optimization algorithm with a geometry that is well-suited for RNNs, we closed over half of the performance gap between vanilla RNNs and LSTMs. This is particularly useful for applications in which we seek compressed models with fast prediction time and minimal storage requirements, and it is also a step toward bridging the gap between LSTMs and RNNs.
Acknowledgments
This research was supported in part by NSF RI/AF grant 1302662, an Intel ICRI-CI award, ONR
Grant N000141310721, and ADeLAIDE grant FA8750-16C-0130-001. We thank Saizheng Zhang
for sharing a base code for RNNs.
⁵ http://mattmahoney.net/dc/textdata
References
[1] Martin Arjovsky, Amar Shah, and Yoshua Bengio. Unitary evolution recurrent neural networks. arXiv preprint arXiv:1511.06464, 2015.
[2] Yoshua Bengio, Patrice Simard, and Paolo Frasconi. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2):157-166, 1994.
[3] Kyunghyun Cho, Bart van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724-1734, 2014.
[4] Alex Graves and Navdeep Jaitly. Towards end-to-end speech recognition with recurrent neural networks. In Proceedings of the International Conference on Machine Learning (ICML), pages 1764-1772, 2014.
[5] Song Han, Huizi Mao, and William J. Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. In Proceedings of the International Conference on Learning Representations, 2016.
[6] Sepp Hochreiter. The vanishing gradient problem during learning recurrent neural nets and problem solutions. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 6(02), 1998.
[7] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8), 1997.
[8] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations, 2015.
[9] Ryan Kiros, Ruslan Salakhutdinov, and Richard S. Zemel. Unifying visual-semantic embeddings with multimodal neural language models. Transactions of the Association for Computational Linguistics, 2015.
[10] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems (NIPS), pages 1097-1105, 2012.
[11] David Krueger and Roland Memisevic. Regularizing RNNs by stabilizing activations. In Proceedings of the International Conference on Learning Representations, 2016.
[12] Quoc V. Le, Navdeep Jaitly, and Geoffrey E. Hinton. A simple way to initialize recurrent networks of rectified linear units. arXiv preprint arXiv:1504.00941, 2015.
[13] Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313-330, 1993.
[14] Tomáš Mikolov, Ilya Sutskever, Anoop Deoras, Hai-Son Le, Stefan Kombrink, and Jan Cernocky. Subword language modeling with neural networks. (http://www.fit.vutbr.cz/~imikolov/rnnlm/char.pdf), 2012.
[15] Vinod Nair and Geoffrey E. Hinton. Rectified linear units improve restricted Boltzmann machines. In Proceedings of the International Conference on Machine Learning (ICML), pages 807-814, 2010.
[16] Behnam Neyshabur, Ruslan Salakhutdinov, and Nathan Srebro. Path-SGD: Path-normalized optimization in deep neural networks. In Advances in Neural Information Processing Systems (NIPS), 2015.
[17] Behnam Neyshabur, Ryota Tomioka, Ruslan Salakhutdinov, and Nathan Srebro. Data-dependent path normalization in neural networks. In International Conference on Learning Representations, 2016.
[18] Behnam Neyshabur, Ryota Tomioka, and Nathan Srebro. Norm-based capacity control in neural networks. In Proceedings of the 28th Conference on Learning Theory (COLT), 2015.
[19] Yann Ollivier. Riemannian metrics for neural networks II: recurrent networks and learning symbolic data sequences. Information and Inference, page iav007, 2015.
[20] Marius Pachitariu and Maneesh Sahani. Regularization and nonlinearities for neural language models: when are they needed? arXiv preprint arXiv:1301.5650, 2013.
[21] Andrew M. Saxe, James L. McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. In International Conference on Learning Representations, 2014.
[22] Sachin S. Talathi and Aniket Vartak. Improving performance of recurrent neural network with ReLU nonlinearity. In International Conference on Learning Representations Workshop Track, 2014.
[23] Saizheng Zhang, Yuhuai Wu, Tong Che, Zhouhan Lin, Roland Memisevic, Ruslan Salakhutdinov, and Yoshua Bengio. Architectural complexity measures of recurrent neural networks. arXiv preprint arXiv:1602.08210, 2016.
5,764 | 6,215 | On Multiplicative Integration with
Recurrent Neural Networks
Yuhuai Wu¹*, Saizheng Zhang²*, Ying Zhang², Yoshua Bengio²,⁴ and Ruslan Salakhutdinov³,⁴
¹University of Toronto, ²MILA, Université de Montréal, ³Carnegie Mellon University, ⁴CIFAR
[email protected],2 {firstname.lastname}@umontreal.ca,[email protected]
Abstract
We introduce a general and simple structural design called "Multiplicative Integration" (MI) to improve recurrent neural networks (RNNs). MI changes the way in which information from different sources flows and is integrated in the computational building block of an RNN, while introducing almost no extra parameters. The new structure can be easily embedded into many popular RNN models, including LSTMs and GRUs. We empirically analyze its learning behaviour and conduct evaluations on several tasks using different RNN models. Our experimental results demonstrate that Multiplicative Integration can provide a substantial performance boost over many of the existing RNN models.
1
Introduction
Recently there has been a resurgence of new structural designs for recurrent neural networks (RNNs) [1, 2, 3]. Most of these designs are derived from popular structures including vanilla RNNs, Long Short Term Memory networks (LSTMs) [4] and Gated Recurrent Units (GRUs) [5]. Despite their varying characteristics, most of them share a common computational building block, described by the following equation:

$$\phi(\mathbf{W}x + \mathbf{U}z + b), \qquad (1)$$

where $x \in \mathbb{R}^n$ and $z \in \mathbb{R}^m$ are state vectors coming from different information sources, $\mathbf{W} \in \mathbb{R}^{d \times n}$ and $\mathbf{U} \in \mathbb{R}^{d \times m}$ are state-to-state transition matrices, and $b$ is a bias vector. This computational building block serves as a combinator for integrating the information flow from $x$ and $z$ by a sum operation "+", followed by a nonlinearity $\phi$. We refer to it as the additive building block. Additive building blocks are widely implemented in various state computations in RNNs (e.g. hidden state computations for vanilla-RNNs, and gate/cell computations of LSTMs and GRUs).
In this work, we propose an alternative design for constructing the computational building block by changing the procedure of information integration. Specifically, instead of utilizing the sum operation "+", we propose to use the Hadamard product "⊙" to fuse $\mathbf{W}x$ and $\mathbf{U}z$:

$$\phi(\mathbf{W}x \odot \mathbf{U}z + b) \qquad (2)$$
The result of this modification changes the RNN from first order to second order [6], while introducing no extra parameters. We call this kind of information integration design a form of Multiplicative Integration. The effect of multiplication naturally results in a gating-type structure, in which $\mathbf{W}x$ and $\mathbf{U}z$ are the gates of each other. More specifically, one can think of the state-to-state computation $\mathbf{U}z$ (where for example $z$ represents the previous state) as dynamically rescaled by $\mathbf{W}x$ (where for example $x$ represents the input). Such rescaling does not exist in the additive building block, in which $\mathbf{U}z$ is independent of $x$. This relatively simple modification brings about advantages over the additive building block, as it alters the RNN's gradient properties, which we discuss in detail in the next section, as well as verify through extensive experiments.
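The two building blocks, Eq. (1) and Eq. (2), differ by a single operation. A minimal numpy sketch (our own; the shapes and the tanh nonlinearity are illustrative choices):

```python
import numpy as np

def additive_block(W, U, b, x, z):
    return np.tanh(W @ x + U @ z + b)          # Eq. (1): sum integration

def mi_block(W, U, b, x, z):
    return np.tanh((W @ x) * (U @ z) + b)      # Eq. (2): Hadamard-product integration

rng = np.random.default_rng(0)
d, n, m = 5, 3, 5
W, U, b = rng.normal(size=(d, n)), rng.normal(size=(d, m)), np.zeros(d)
x, z = rng.normal(size=n), rng.normal(size=m)
print(additive_block(W, U, b, x, z))
print(mi_block(W, U, b, x, z))
```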
* Equal contribution.
In the following sections, we first introduce a general formulation of Multiplicative Integration. We
then compare it to the additive building block on several sequence learning tasks, including character
level language modelling, speech recognition, large scale sentence representation learning using a
Skip-Thought model, and teaching a machine to read and comprehend for a question answering
task. The experimental results (together with several existing state-of-the-art models) show that
various RNN structures (including vanilla RNNs, LSTMs, and GRUs) equipped with Multiplicative
Integration provide better generalization and easier optimization. Its main advantages include: (1) it
enjoys better gradient properties due to the gating effect. Most of the hidden units are non-saturated;
(2) the general formulation of Multiplicative Integration naturally includes the regular additive
building block as a special case, and introduces almost no extra parameters compared to the additive
building block; and (3) it is a drop-in replacement for the additive building block in most of the
popular RNN models, including LSTMs and GRUs. It can also be combined with other RNN training
techniques such as Recurrent Batch Normalization [7]. We further discuss its relationship to existing
models, including Hidden Markov Models (HMMs) [8], second order RNNs [6] and Multiplicative
RNNs [9].
2
Structure Description and Analysis
2.1
General Formulation of Multiplicative Integration
The key idea behind Multiplicative Integration is to integrate the different information flows $\mathbf{W}x$ and $\mathbf{U}z$ by the Hadamard product "⊙". A more general formulation of Multiplicative Integration includes two more bias vectors $\beta_1$ and $\beta_2$ added to $\mathbf{W}x$ and $\mathbf{U}z$:

$$\phi((\mathbf{W}x + \beta_1) \odot (\mathbf{U}z + \beta_2) + b) \qquad (3)$$

where $\beta_1, \beta_2 \in \mathbb{R}^d$ are bias vectors. Notice that such a formulation contains the first order terms of the additive building block, i.e., $\beta_1 \odot \mathbf{U}h_{t-1} + \beta_2 \odot \mathbf{W}x_t$. In order to make the Multiplicative Integration more flexible, we introduce another bias vector $\alpha \in \mathbb{R}^d$ to gate² the term $\mathbf{W}x \odot \mathbf{U}z$, obtaining the following formulation:

$$\phi(\alpha \odot \mathbf{W}x \odot \mathbf{U}z + \beta_1 \odot \mathbf{U}z + \beta_2 \odot \mathbf{W}x + b), \qquad (4)$$
Note that the number of parameters of the Multiplicative Integration is about the same as that of the additive building block, since the number of new parameters ($\alpha$, $\beta_1$ and $\beta_2$) is negligible compared to the total number of parameters. Also, Multiplicative Integration can be easily extended to LSTMs and GRUs³ that adopt vanilla building blocks for computing gates and output states, where one can directly replace them with the Multiplicative Integration. More generally, in any kind of structure where $k$ information flows ($k \geq 2$) are involved (e.g. residual networks [10]), one can implement pairwise Multiplicative Integration for integrating all $k$ information sources.
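A sketch of the general formulation in Eq. (4) (our own; note that setting α = 0 and β₁ = β₂ = 1 recovers the additive block of Eq. (1), and each gate or cell computation of an LSTM/GRU can be swapped for this function, as the paper proposes; the exact gated variants are given in the paper's Appendix):

```python
import numpy as np

def mi_general(W, U, x, z, alpha, beta1, beta2, b, phi=np.tanh):
    """General Multiplicative Integration building block, Eq. (4)."""
    wx, uz = W @ x, U @ z
    return phi(alpha * wx * uz + beta1 * uz + beta2 * wx + b)

rng = np.random.default_rng(0)
d, n, m = 5, 3, 5
W, U = rng.normal(size=(d, n)), rng.normal(size=(d, m))
x, z = rng.normal(size=n), rng.normal(size=m)
# alpha = 0, beta1 = beta2 = 1 degenerates to the additive block (footnote 2):
h = mi_general(W, U, x, z, alpha=np.zeros(d), beta1=np.ones(d),
               beta2=np.ones(d), b=np.zeros(d))
print(np.allclose(h, np.tanh(W @ x + U @ z)))   # True
```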
2.2 Gradient Properties

The Multiplicative Integration has different gradient properties compared to the additive building block. For clarity of presentation, we first look at a vanilla-RNN and an RNN with Multiplicative Integration embedded, referred to as MI-RNN. That is, $h_t = \phi(\mathbf{W}x_t + \mathbf{U}h_{t-1} + b)$ versus $h_t = \phi(\mathbf{W}x_t \odot \mathbf{U}h_{t-1} + b)$. In a vanilla-RNN, the gradient $\frac{\partial h_t}{\partial h_{t-n}}$ can be computed as follows:

$$\frac{\partial h_t}{\partial h_{t-n}} = \prod_{k=t-n+1}^{t} \mathbf{U}^{\top}\, \mathrm{diag}(\phi'_k), \qquad (5)$$

where $\phi'_k = \phi'(\mathbf{W}x_k + \mathbf{U}h_{k-1} + b)$. The equation above shows that the gradient flow through time heavily depends on the hidden-to-hidden matrix $\mathbf{U}$, while $\mathbf{W}$ and $x_k$ appear to play a limited role: they only come in through the derivative $\phi'$, mixed with $\mathbf{U}h_{k-1}$. On the other hand, the gradient $\frac{\partial h_t}{\partial h_{t-n}}$ of a MI-RNN is⁴:

$$\frac{\partial h_t}{\partial h_{t-n}} = \prod_{k=t-n+1}^{t} \mathbf{U}^{\top}\, \mathrm{diag}(\mathbf{W}x_k)\, \mathrm{diag}(\phi'_k), \qquad (6)$$

² If $\alpha = \mathbf{0}$, the Multiplicative Integration degenerates to the vanilla additive building block.
³ See exact formulations in the Appendix.
⁴ Here we adopt the simplest formulation of Multiplicative Integration for illustration. In the more general case (Eq. 4), $\mathrm{diag}(\mathbf{W}x_k)$ in Eq. 6 becomes $\mathrm{diag}(\alpha \odot \mathbf{W}x_k + \beta_1)$.
where $\phi'_k = \phi'(\mathbf{W}x_k \odot \mathbf{U}h_{k-1} + b)$. Looking at this gradient, we see that the matrix $\mathbf{W}$ and the current input $x_k$ are directly involved in the gradient computation by gating the matrix $\mathbf{U}$, and hence are more capable of altering the updates of the learning system. As we show in our experiments, with $\mathbf{W}x_k$ directly gating the gradient, the vanishing/exploding problem is alleviated: $\mathbf{W}x_k$ dynamically reconciles $\mathbf{U}$, making gradient propagation easier compared to regular RNNs. For LSTMs and GRUs with Multiplicative Integration, the gradient propagation properties are more complicated, but in principle the benefits of the gating effect also persist in these models.
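The structural difference between Eq. (5) and Eq. (6) can be made explicit by accumulating the Jacobian factors numerically: in the MI case every factor is additionally gated by $\mathrm{diag}(\mathbf{W}x_k)$. A toy sketch (our own construction, not one of the paper's experiments):

```python
import numpy as np

rng = np.random.default_rng(0)
H, n, T = 50, 20, 30
U = rng.normal(0.0, 0.02, size=(H, H))
W = rng.normal(0.0, 0.02, size=(H, n))
xs = rng.normal(size=(T, n))

def jacobian_product_norm(mi):
    """||dh_T / dh_0|| accumulated step by step, for vanilla (Eq. 5) or MI (Eq. 6)."""
    h, J = np.zeros(H), np.eye(H)
    for x in xs:
        pre = (W @ x) * (U @ h) if mi else W @ x + U @ h
        d = 1.0 - np.tanh(pre) ** 2              # phi'_k for phi = tanh
        gate = (W @ x) if mi else np.ones(H)     # diag(W x_k) gates U in Eq. (6)
        J = ((d * gate)[:, None] * U) @ J        # prepend the factor dh_t / dh_{t-1}
        h = np.tanh(pre)
    return np.linalg.norm(J)

print(jacobian_product_norm(mi=False), jacobian_product_norm(mi=True))
```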
3
Experiments
In all of our experiments, we use the general form of Multiplicative Integration (Eq. 4) for any hidden
state/gate computations, unless otherwise specified.
3.1 Exploratory Experiments
To further understand the functionality of Multiplicative Integration, we take a simple RNN for illustration and perform several exploratory experiments on the character level language modeling task using the Penn-Treebank dataset [11], following the data partition in [12]. The length of the training sequence is 50. All models have a single hidden layer of size 2048, and we use the Adam optimization algorithm [13] with learning rate $10^{-4}$. Weights are initialized to samples drawn from uniform[−0.02, 0.02]. Performance is evaluated by the bits-per-character (BPC) metric, which is log2 of the perplexity.
3.1.1 Gradient Properties

To analyze the gradient flow of the model, we divide the gradient in Eq. 6 into two parts: 1. the gated matrix products $\mathbf{U}^{\top}\, \mathrm{diag}(\mathbf{W}x_k)$, and 2. the derivative of the nonlinearity $\phi'$. We separately analyze the properties of each term compared to the additive building block. We first focus on the gating effect brought by $\mathrm{diag}(\mathbf{W}x_k)$. In order to separate out the effect of the nonlinearity, we chose $\phi$ to be the identity map; hence both vanilla-RNN and MI-RNN reduce to linear models, referred to as lin-RNN and lin-MI-RNN.
For each model we monitor the log-L2-norm of the gradient $\log \|\partial C / \partial h_t\|_2$ (averaged over the training set) after every training epoch, where $h_t$ is the hidden state at time step $t$, and $C$ is the negative log-likelihood of the single character prediction at the final time step ($t = 50$). Figure 1 shows the evolution of the gradient norms for small $t$, i.e., 0, 5, 10, as they better reflect the gradient propagation behaviour. Observe that the norms of lin-MI-RNN (orange) increase rapidly and soon exceed the corresponding norms of lin-RNN by a large margin. The norms of lin-RNN stay close to zero (≈ $10^{-4}$) and their changes over time are almost negligible. This observation implies that with the help of the $\mathrm{diag}(\mathbf{W}x_k)$ term, the gradient vanishing of lin-MI-RNN can be alleviated compared to lin-RNN. The final test BPC (bits-per-character) of lin-MI-RNN is 1.48, which is comparable to a vanilla-RNN with a stabilizing regularizer [14], while lin-RNN performs rather poorly, achieving a test BPC of over 2.
Next we look into the nonlinearity $\phi$. We chose $\phi = \tanh$ for both vanilla-RNN and MI-RNN. Figure 1 (c) and (d) shows a comparison of histograms of hidden activations over all time steps on the validation set after training. Interestingly, in (c) for vanilla-RNN, most activations are saturated with values around ±1, whereas in (d) for MI-RNN, most activations are non-saturated with values around 0. This has a direct consequence for gradient propagation: non-saturated activations imply that $\mathrm{diag}(\phi'_k) \approx 1$ for $\phi = \tanh$, which can help gradients propagate, whereas saturated activations imply that $\mathrm{diag}(\phi'_k) \approx 0$, resulting in vanishing gradients.
3.1.2 Scaling Problem
When adding two numbers of different orders of magnitude, the smaller one might be negligible for the sum. However, when multiplying two numbers, the value of the product depends on both, regardless of the scales. This principle also applies when comparing Multiplicative Integration to the additive building block. In this experiment, we test whether Multiplicative Integration is more robust to the scales of weight values. Following the same models as in Section 3.1.1, we first calculated the norms of $\mathbf{W}x_k$ and $\mathbf{U}h_{k-1}$ for both vanilla-RNN and MI-RNN for different $k$ after training. We found that in both structures, $\mathbf{W}x_k$ is a lot smaller than $\mathbf{U}h_{k-1}$ in magnitude. This might be due to the fact that $x_k$ is a one-hot vector, making the number of updates for (columns of) $\mathbf{W}$ smaller than for $\mathbf{U}$. As a result, in vanilla-RNN, the pre-activation term $\mathbf{W}x_k + \mathbf{U}h_{k-1}$ is largely controlled by the value of $\mathbf{U}h_{k-1}$, while $\mathbf{W}x_k$ becomes rather small. In MI-RNN, on the other hand, the pre-activation term $\mathbf{W}x_k \odot \mathbf{U}h_{k-1}$ still depends on the values of both $\mathbf{W}x_k$ and $\mathbf{U}h_{k-1}$, due to the multiplication.
[Figure 1: (a) Curves of the log-L2-norm of gradients for lin-RNN (blue) and lin-MI-RNN (orange); time t gradually changes over {1, 5, 10}. (b) Validation BPC curves for vanilla-RNN, MI-RNN-simple using Eq. 2, and MI-RNN-general using Eq. 4. (c) Histogram of vanilla-RNN's hidden activations over the validation set; most activations are saturated. (d) Histogram of MI-RNN's hidden activations over the validation set; most activations are not saturated.]
We next tried different initializations of $\mathbf{W}$ and $\mathbf{U}$ to test their sensitivity to scaling. For each model, we fix the initialization of $\mathbf{U}$ to uniform[−0.02, 0.02] and initialize $\mathbf{W}$ to uniform[−$r_W$, $r_W$] where $r_W$ varies in {0.02, 0.1, 0.3, 0.6}. Table 1, top left panel, shows the results. As we increase the scale of $\mathbf{W}$, the performance of the vanilla-RNN improves, suggesting that the model is able to better utilize the input information. On the other hand, MI-RNN is much more robust to different initializations, where the scaling has almost no effect on the final performance.
3.1.3 On different choices of the formulation

In our third experiment, we evaluated the performance of different computational building blocks, namely Eq. 1 (vanilla-RNN), Eq. 2 (MI-RNN-simple) and Eq. 4 (MI-RNN-general)⁵. From the validation curves in Figure 1 (b), we see that both MI-RNN-simple and MI-RNN-general yield much better performance compared to vanilla-RNN, and MI-RNN-general has a faster convergence speed compared to MI-RNN-simple. We also compared our results to previously published models in Table 1, bottom left panel, where MI-RNN-general achieves a test BPC of 1.39, which is to our knowledge the best result for RNNs on this task without complex gating/cell mechanisms.
3.2
Character Level Language Modeling
In addition to the Penn-Treebank dataset, we also perform character level language modeling on two larger datasets: text8⁶ and Hutter Challenge Wikipedia⁷. Both of them contain 100M characters from Wikipedia, while text8 has an alphabet size of 27 and Hutter Challenge Wikipedia has an alphabet size of 205. For both datasets, we follow the training protocols in [12] and [1] respectively. We use Adam for optimization with the starting learning rate grid-searched in {0.002, 0.001, 0.0005}. If the validation BPC (bits-per-character) does not decrease for 2 epochs, we halve the learning rate.
We implemented Multiplicative Integration on both vanilla-RNN and LSTM, referred to as MI-RNN and MI-LSTM. The results for the text8 dataset are shown in Table 1, bottom middle panel. All five models, including some of the previously published models, have the same number of

⁵ We perform a hyper-parameter search for the initialization of {α, β₁, β₂, b} in MI-RNN-general.
⁶ http://mattmahoney.net/dc/textdata
⁷ http://prize.hutter1.net/
Test BPC for different scales of W initialization:
          r_W = 0.02   0.1    0.3    0.6    std
RNN       1.69         1.65   1.57   1.54   0.06
MI-RNN    1.39         1.40   1.40   1.41   0.008

WSJ Corpus                     CER    WER
DRNN+CTCbeamsearch [15]        10.0   14.1
Encoder-Decoder [16]           6.4    9.3
LSTM+CTCbeamsearch [17]        9.2    8.7
Eesen [18]                     -      7.3
LSTM+CTC+WFST (ours)           6.5    8.7
MI-LSTM+CTC+WFST (ours)        6.0    8.2

Penn-Treebank              BPC
RNN [12]                   1.42
HF-MRNN [12]               1.41
RNN+stabilization [14]     1.48
MI-RNN (ours)              1.39
linear MI-RNN (ours)       1.48

text8                      BPC
RNN+smoothReLU [19]        1.55
HF-MRNN [12]               1.54
MI-RNN (ours)              1.52
LSTM (ours)                1.51
MI-LSTM (ours)             1.44

Hutter Wikipedia           BPC
stacked-LSTM [20]          1.67
GF-LSTM [1]                1.58
grid-LSTM [2]              1.47
MI-LSTM (ours)             1.44

Table 1: Top left: test BPCs and the standard deviation of models with different scales of weight initialization. Top right: test CERs and WERs on the WSJ corpus. Bottom left: test BPCs on character level Penn-Treebank. Bottom middle: test BPCs on character level text8. Bottom right: test BPCs on character level Hutter Prize Wikipedia.
parameters (≈4M). For RNNs without complex gating/cell mechanisms (the first three results), our MI-RNN (with {α, β₁, β₂, b} initialized as {2, 0.5, 0.5, 0}) performs the best, and our MI-LSTM (with {α, β₁, β₂, b} initialized as {1, 0.5, 0.5, 0}) outperforms all other models by a large margin⁸. On the Hutter Challenge Wikipedia dataset, we compare our MI-LSTM (single layer with 2048 units, ≈17M parameters, with {α, β₁, β₂, b} initialized as {1, 1, 1, 0}) to the previous stacked LSTM (7 layers, ≈27M) [20], GF-LSTM (5 layers, ≈20M) [1], and grid-LSTM (6 layers, ≈17M) [2]. Table 1, bottom right panel, shows the results. Despite its simple structure compared to the sophisticated connection designs in GF-LSTM and grid-LSTM, our MI-LSTM outperforms all other models and achieves a new state of the art on this task.
3.3
Speech Recognition
We next evaluate our models on the Wall Street Journal (WSJ) corpus (available as LDC corpora LDC93S6B and LDC94S13B), where we use the full 81-hour set "si284" for training, set "dev93" for validation and set "eval92" for test. We follow the same data preparation process and model setting as in [18], and we use 59 characters as the targets for the acoustic modelling. Decoding is done with the CTC [21] based weighted finite-state transducers (WFSTs) [22] as proposed by [18].
Our model (referred to as MI-LSTM+CTC+WFST) consists of 4 bidirectional MI-LSTM layers, each with 320 units for each direction. CTC is performed on top to resolve the alignment
issue in speech transcription. For comparison, we also train a baseline model (referred to as
LSTM+CTC+WFST) with the same size but using vanilla LSTM. Adam with learning rate 0.0001
is used for optimization and Gaussian weight noise with zero mean and 0.05 standard deviation
is injected for regularization. We evaluate our models on the character error rate (CER) without
language model and the word error rate (WER) with extended trigram language model.
Table 1, top right panel, shows that MI-LSTM+CTC+WFST achieves quite good results on both CER
and WER compared to recent works, and it has a clear improvement over the baseline model. Note
that we did not conduct a careful hyper-parameter search on this task, hence one could potentially
obtain better results with better decoding schemes and regularization techniques.
3.4
Learning Skip-Thought Vectors
Next, we evaluate our Multiplicative Integration on the Skip-Thought model of [23]. Skip-Thought is
an encoder-decoder model that attempts to learn generic, distributed sentence representations. The
model produces sentence representations that are robust and perform well in practice, achieving excellent results across many different NLP tasks. The model was trained on the BookCorpus dataset, which consists of 11,038 books with 74,004,228 sentences.
8 [7] reports better results, but they use much larger models (≈16M parameters), which are not directly comparable.
Semantic-Relatedness      r       rho     MSE
uni-skip [23]             0.8477  0.7780  0.2872
bi-skip [23]              0.8405  0.7696  0.2995
combine-skip [23]         0.8584  0.7916  0.2687
uni-skip (ours)           0.8436  0.7735  0.2946
MI-uni-skip (ours)        0.8588  0.7952  0.2679

Paraphrase detection      Acc     F1
uni-skip [23]             73.0    81.9
bi-skip [23]              71.2    81.2
combine-skip [23]         73.0    82.0
uni-skip (ours)           74.0    81.9
MI-uni-skip (ours)        74.0    82.1

Classification            MR      CR      SUBJ    MPQA
uni-skip [23]             75.5    79.3    92.1    86.9
bi-skip [23]              73.9    77.9    92.5    83.3
combine-skip [23]         76.5    80.1    93.6    87.1
uni-skip (ours)           75.9    80.1    93.0    87.0
MI-uni-skip (ours)        77.9    82.3    93.3    88.1

Attentive Reader                  Val. Err.
LSTM [7]                          0.5033
BN-LSTM [7]                       0.4951
BN-everywhere [7]                 0.5000
LSTM (ours)                       0.5053
MI-LSTM (ours)                    0.4721
MI-LSTM+BN (ours)                 0.4685
MI-LSTM+BN-everywhere (ours)      0.4644
Table 2: Top left: skip-thought+MI on Semantic-Relatedness task. Top Right: skip-thought+MI on Paraphrase
Detection task. Bottom left: skip-thought+MI on four different classification tasks. Bottom right: Multiplicative
Integration (with batch normalization) on Teaching Machines to Read and Comprehend task.
Not surprisingly, a single pass through the training data can take up to a week on a high-end GPU (as reported in [23]). Such training speed largely prevents one from performing a careful hyper-parameter search. However, with Multiplicative Integration, not only is the training time shortened by a factor of two, but the final performance is also significantly improved.
We exactly follow the authors' Theano implementation of the skip-thought model^9: encoder and decoder are single-layer GRUs with a hidden-layer size of 2400; all recurrent matrices adopt orthogonal initialization, while non-recurrent weights are initialized from a uniform distribution. Adam is used for optimization. We implemented Multiplicative Integration only for the encoder GRU (embedding MI into the decoder did not provide any substantial gains). We refer to our model as MI-uni-skip, with {α, β1, β2, b} initialized as {1, 1, 1, 0}. We also train a baseline model of the same size, referred to as uni-skip (ours), which essentially reproduces the original model of [23].
During the course of training, we evaluated the skip-thought vectors on the semantic relatedness task, using the SICK dataset, every 2500 updates for both MI-uni-skip and the baseline model (each iteration processes a mini-batch of size 64). The results are shown in Figure 2a. Note that MI-uni-skip
significantly outperforms the baseline, not only in terms of speed of convergence, but also in terms
of final performance. At around 125k updates, MI-uni-skip already exceeds the best performance
achieved by the baseline, which takes about twice the number of updates.
We also evaluated both models after one week of training, reporting results on six of the eight tasks considered in [23]: the semantic relatedness task on the SICK dataset, the paraphrase detection task on the Microsoft Research Paraphrase Corpus, and four classification benchmarks: movie review sentiment (MR), customer product reviews (CR), subjectivity/objectivity classification (SUBJ), and opinion polarity (MPQA). We also compared our results with the results reported for three models in
the original skip-thought paper: uni-skip, bi-skip, combine-skip. Uni-skip is the same model as our
baseline, bi-skip is a bidirectional model of the same size, and combine-skip takes the concatenation
of the vectors from uni-skip and bi-skip to form a 4800-dimensional vector for task evaluation. Table 2 shows that MI-uni-skip dominates across all tasks. Not only does it achieve higher performance than the baseline model; in many cases it also outperforms the combine-skip model, which has twice the number of dimensions. Clearly, Multiplicative Integration provides a faster and better way to train a large-scale Skip-Thought model.
3.5 Teaching Machines to Read and Comprehend
In our last experiment, we show that the use of Multiplicative Integration can be combined with
other techniques for training RNNs, and the advantages of using MI still persist. Recently, [7]
introduced Recurrent Batch-Normalization. They evaluated their proposed technique on a unidirectional
9 https://github.com/ryankiros/skip-thoughts
[Figure 2 appears here: panel (a) plots MSE against the number of iterations (in units of 2.5k); panel (b) plots validation error against the number of iterations (in units of 1k).]
Figure 2: (a) MSE curves of uni-skip (ours) and MI-uni-skip (ours) on the semantic relatedness task on the SICK dataset. MI-uni-skip significantly outperforms the baseline uni-skip. (b) Validation error curves of attentive reader models. There is a clear margin between models with and without MI.
Attentive Reader Model [24] for the question answering task using the CNN corpus^10. To test our approach, we evaluated the following four models: (1) a vanilla LSTM attentive reader model with a single hidden layer of size 240 (same as [7]) as our baseline, referred to as LSTM (ours); (2) a Multiplicative Integration LSTM with a single hidden layer of size 240, referred to as MI-LSTM; (3) MI-LSTM with Batch-Norm, referred to as MI-LSTM+BN; and (4) MI-LSTM with Batch-Norm everywhere (as detailed in [7]), referred to as MI-LSTM+BN-everywhere. We compared our models to the results reported in [7] (referred to as LSTM, BN-LSTM and BN-everywhere)^11.
For all MI models, {α, β1, β2, b} were initialized to {1, 1, 1, 0}. We follow the experimental protocol of [7]^12 and use exactly the same settings as theirs, except that we remove gradient clipping for the MI-LSTMs. Figure 2b shows the validation curves of the baseline (LSTM), MI-LSTM, BN-LSTM, and MI-LSTM+BN, and the final validation errors of all models are reported in Table 2, bottom right panel. Clearly, using Multiplicative Integration improves model performance regardless of whether Batch-Norm is used. However, the combination of MI and Batch-Norm provides the best performance and the fastest convergence. This shows the general applicability of Multiplicative Integration when combining it with other optimization techniques.
10 Note that [7] used a truncated version of the original dataset in order to save computation.
11 Learning curves and the final result numbers were obtained by email correspondence with the authors of [7].
12 https://github.com/cooijmanstim/recurrent-batch-normalization.git
4 Relationship to Previous Models
4.1 Relationship to Hidden Markov Models
One can show that, under certain constraints, MI-RNN effectively implements the forward algorithm of a Hidden Markov Model (HMM). A direct mapping can be constructed as follows (see [25] for a similar derivation). Let U ∈ R^{m×m} be the state transition probability matrix with U_{ij} = Pr[h_{t+1} = i | h_t = j], and let W ∈ R^{m×n} be the observation probability matrix, arranged so that for a one-hot x_t (e.g., in many of the language modelling tasks) multiplying by W selects a column of the observation probabilities: if the j-th entry of x_t is one, then (Wx_t)_i = Pr[x_t | h_t = i]. Let h₀ = Pr[h₀] be the initial state distribution, and let {h_t}_{t≥1} be the alpha values in the forward algorithm of the HMM, i.e., h_t = Pr[x₁, ..., x_t, h_t]. Then Uh_t = Pr[x₁, ..., x_t, h_{t+1}], and thus h_{t+1} = Wx_{t+1} ⊙ Uh_t = Pr[x_{t+1} | h_{t+1}] · Pr[x₁, ..., x_t, h_{t+1}] = Pr[x₁, ..., x_{t+1}, h_{t+1}]. To exactly implement the forward algorithm using Multiplicative Integration, the matrices W and U have to be probability matrices, x_t needs to be a one-hot vector, the nonlinearity φ needs to be linear, and all bias terms are dropped. An RNN with Multiplicative Integration can therefore be seen as a nonlinear extension of HMMs. The extra freedom in parameter values and the nonlinearity make the model more flexible than an HMM.
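The sketch below checks this correspondence numerically. It is a hypothetical NumPy setup using the column-selection convention (Wx_t)_i = Pr[x_t | h_t = i] for one-hot x_t; the constrained MI recurrence (linear φ, no biases, α and the β's absorbed) and a forward recursion written with explicit sums produce the same alpha values.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, T = 4, 6, 8                      # hidden states, alphabet size, sequence length

# U[i, j] = Pr[h_{t+1} = i | h_t = j]  (columns sum to one)
U = rng.random((m, m)); U /= U.sum(axis=0, keepdims=True)
# W arranged so that (W @ x)[i] = Pr[x_t | h_t = i] for one-hot x (rows sum to one)
W = rng.random((m, n)); W /= W.sum(axis=1, keepdims=True)

h0 = np.full(m, 1.0 / m)               # initial state distribution Pr[h_0]
obs = rng.integers(0, n, T)            # observed symbol indices
xs = np.eye(n)[obs]                    # one-hot observation vectors

# Reference HMM forward algorithm with explicit sums:
# alpha_{t+1}[i] = Pr[x_{t+1} | h_{t+1} = i] * sum_j U[i, j] * alpha_t[j]
alpha = h0.copy()
for k in obs:
    alpha = np.array([W[i, k] * sum(U[i, j] * alpha[j] for j in range(m))
                      for i in range(m)])

# Constrained MI recurrence: h_{t+1} = (W x_{t+1}) ⊙ (U h_t)
h = h0.copy()
for x in xs:
    h = (W @ x) * (U @ h)              # Hadamard product of the two flows

assert np.allclose(alpha, h)
print(alpha.sum())                     # = Pr[x_1, ..., x_T], the sequence likelihood
```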
4.2 Relations to Second Order RNNs and Multiplicative RNNs
MI-RNN is related to the second order RNN [6] and the multiplicative RNN (MRNN) [9]. We first describe the similarities to these two models.
The second order RNN involves a second order term s_t in a vanilla RNN, where the i-th element s_{t,i} is computed by the bilinear form s_{t,i} = x_tᵀ T⁽ⁱ⁾ h_{t−1}, where T⁽ⁱ⁾ ∈ R^{n×m} (1 ≤ i ≤ m) is the i-th slice of a tensor T ∈ R^{m×n×m}. Multiplicative Integration also involves a second order term, s_t = α ⊙ Wx_t ⊙ Uh_{t−1}, but in our case s_{t,i} = α_i (w_i · x_t)(u_i · h_{t−1}) = x_tᵀ (α_i w_i ⊗ u_i) h_{t−1}, where w_i and u_i are the i-th rows of W and U, and α_i is the i-th element of α. Note that the outer product α_i w_i ⊗ u_i is a rank-1 matrix. The Multiplicative RNN is also a second order RNN, but one which approximates T by the tensor decomposition Σ_i x_{t,i} T⁽ⁱ⁾ = P diag(Vx_t) Q. For MI-RNN, we can also think of the second order term as a tensor decomposition: α ⊙ Wx_t ⊙ Uh_{t−1} = U(x_t) h_{t−1} = [diag(α) diag(Wx_t) U] h_{t−1}.
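A three-line numerical check of the last identity (a hypothetical NumPy setup; the dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 5, 7
x, h = rng.normal(size=n), rng.normal(size=m)
W, U, alpha = rng.normal(size=(m, n)), rng.normal(size=(m, m)), rng.normal(size=m)

lhs = alpha * (W @ x) * (U @ h)                # second order MI term
rhs = np.diag(alpha) @ np.diag(W @ x) @ U @ h  # diag(alpha) diag(Wx) U h
assert np.allclose(lhs, rhs)
```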
There are, however, several differences that make MI a favourable model. (1) Simpler parametrization: MI uses a rank-1 approximation compared to the second order RNN, and a diagonal approximation compared to the Multiplicative RNN. Moreover, MI-RNN shares parameters across the first and second order terms, whereas the other two models do not. As a result, the number of parameters is largely reduced, which makes our model more practical for large scale problems while avoiding overfitting. (2) Easier optimization: in tensor decomposition methods, the product of three different (low-rank) matrices generally makes optimization hard [9]; the optimization problem becomes easier with MI, as discussed in Sections 2 and 3. (3) General structural design vs. vanilla-RNN design: Multiplicative Integration can be easily embedded in many other RNN structures, e.g. LSTMs and GRUs, whereas the second order RNN and MRNN present a very specific design for modifying vanilla RNNs.
Moreover, we also compared MI-RNN's performance to the previous HF-MRNN results (a Multiplicative RNN trained by the Hessian-free method) in Table 1, bottom left and bottom middle panels, on the Penn-Treebank and text8 datasets. One can see that MI-RNN outperforms HF-MRNN on both tasks.
4.3 General Multiplicative Integration
Multiplicative Integration can be viewed as a general way of combining information flows from two different sources. In particular, [26] proposed the ladder network, which achieves promising results on semi-supervised learning. In their model, the lateral connections and the backward connections are combined via the 'combinator' function using a Hadamard product; the performance severely degrades without this product, as empirically shown by [27]. [28] explored neural embedding approaches in knowledge bases by formulating relations as bilinear and/or linear mapping functions, and compared a variety of embedding models on the link prediction task. Surprisingly, the best result among all bilinear functions is obtained by the simple weighted Hadamard product. They further carefully compare the multiplicative and additive interactions and show that the multiplicative interaction dominates the additive one.
5 Conclusion
In this paper we proposed Multiplicative Integration (MI), a simple Hadamard-product mechanism for combining information flows in recurrent neural networks. MI can be easily integrated into many popular RNN models, including LSTMs and GRUs, while introducing almost no extra parameters. Indeed, implementing MI requires almost no extra work beyond implementing the base RNN models. We also showed that MI achieves state-of-the-art performance on four different tasks and 11 datasets of varying sizes and scales. We believe that Multiplicative Integration can become a default building block for training various types of RNN models.
Acknowledgments
The authors acknowledge the following agencies for funding and support: NSERC, Canada Research
Chairs, CIFAR, Calcul Quebec, Compute Canada, Disney research and ONR Grant N000141310721.
The authors thank the developers of Theano [29] and Keras [30], and also thank Jimmy Ba for many
thought-provoking discussions.
References
[1] Junyoung Chung, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. Gated feedback recurrent neural
networks. arXiv preprint arXiv:1502.02367, 2015.
[2] Nal Kalchbrenner, Ivo Danihelka, and Alex Graves. Grid long short-term memory. arXiv preprint
arXiv:1507.01526, 2015.
[3] Saizheng Zhang, Yuhuai Wu, Tong Che, Zhouhan Lin, Roland Memisevic, Ruslan Salakhutdinov,
and Yoshua Bengio. Architectural complexity measures of recurrent neural networks. arXiv preprint
arXiv:1602.08210, 2016.
[4] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780,
1997.
[5] Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger
Schwenk, and Yoshua Bengio. Learning phrase representations using rnn encoder-decoder for statistical
machine translation. arXiv preprint arXiv:1406.1078, 2014.
[6] Mark W Goudreau, C Lee Giles, Srimat T Chakradhar, and D Chen. First-order versus second-order
single-layer recurrent neural networks. Neural Networks, IEEE Transactions on, 5(3):511–513, 1994.
[7] Tim Cooijmans, Nicolas Ballas, César Laurent, and Aaron Courville. Recurrent batch normalization.
http://arxiv.org/pdf/1603.09025v4.pdf, 2016.
[8] LE Baum and JA Eagon. An inequality with application to statistical estimation for probabilistic functions
of markov processes and to a model for ecology. Bulletin of the American Mathematical Society, 73:360–363, 1967.
[9] Ilya Sutskever, James Martens, and Geoffrey E Hinton. Generating text with recurrent neural networks. In
Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 1017–1024,
2011.
[10] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition.
arXiv preprint arXiv:1512.03385, 2015.
[11] Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. Building a large annotated corpus
of english: The penn treebank. Computational Linguistics, 19(2):313–330, 1993.
[12] Tomáš Mikolov, Ilya Sutskever, Anoop Deoras, Hai-Son Le, and Stefan Kombrink. Subword language
modeling with neural networks. preprint, (http://www.fit.vutbr.cz/imikolov/rnnlm/char.pdf), 2012.
[13] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint
arXiv:1412.6980, 2014.
[14] David Krueger and Roland Memisevic. Regularizing rnns by stabilizing activations. arXiv preprint
arXiv:1511.08400, 2015.
[15] Awni Y Hannun, Andrew L Maas, Daniel Jurafsky, and Andrew Y Ng. First-pass large vocabulary
continuous speech recognition using bi-directional recurrent dnns. arXiv preprint arXiv:1408.2873, 2014.
[16] Dzmitry Bahdanau, Jan Chorowski, Dmitriy Serdyuk, Philemon Brakel, and Yoshua Bengio. End-to-end
attention-based large vocabulary speech recognition. arXiv preprint arXiv:1508.04395, 2015.
[17] Alex Graves and Navdeep Jaitly. Towards end-to-end speech recognition with recurrent neural networks.
In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 1764–1772,
2014.
[18] Yajie Miao, Mohammad Gowayyed, and Florian Metze. Eesen: End-to-end speech recognition using deep
rnn models and wfst-based decoding. arXiv preprint arXiv:1507.08240, 2015.
[19] Marius Pachitariu and Maneesh Sahani. Regularization and nonlinearities for neural language models:
when are they needed? arXiv preprint arXiv:1301.5650, 2013.
[20] Alex Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.
[21] Alex Graves, Santiago Fernández, Faustino Gomez, and Jürgen Schmidhuber. Connectionist temporal
classification: labelling unsegmented sequence data with recurrent neural networks. In Proceedings of the
23rd International Conference on Machine Learning, pages 369–376. ACM, 2006.
[22] Mehryar Mohri, Fernando Pereira, and Michael Riley. Weighted finite-state transducers in speech recognition. Computer Speech & Language, 16(1):69–88, 2002.
[23] Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba,
and Sanja Fidler. Skip-thought vectors. In Advances in Neural Information Processing Systems, pages
3276–3284, 2015.
[24] Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman,
and Phil Blunsom. Teaching machines to read and comprehend. In Advances in Neural Information
Processing Systems, pages 1684–1692, 2015.
[25] T. Wessels and C. W. Omlin. Refining hidden markov models with recurrent neural networks. In Neural
Networks, 2000. IJCNN 2000, Proceedings of the IEEE-INNS-ENNS International Joint Conference on,
volume 2, pages 271–276, 2000.
[26] Antti Rasmus, Harri Valpola, Mikko Honkala, Mathias Berglund, and Tapani Raiko. Semi-supervised
learning with ladder network. arXiv preprint arXiv:1507.02672, 2015.
[27] Mohammad Pezeshki, Linxi Fan, Philemon Brakel, Aaron Courville, and Yoshua Bengio. Deconstructing
the ladder network architecture. arXiv preprint arXiv:1511.06430, 2015.
[28] Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. Embedding entities and relations
for learning and inference in knowledge bases. arXiv preprint arXiv:1412.6575, 2014.
[29] Rami Al-Rfou, Guillaume Alain, Amjad Almahairi, and et al. Theano: A python framework for fast
computation of mathematical expressions, 2016.
[30] François Chollet. Keras. GitHub repository: https://github.com/fchollet/keras, 2015.
Minimizing Regret on Reflexive Banach Spaces and
Nash Equilibria in Continuous Zero-Sum Games
Maximilian Balandat, Walid Krichene, Claire Tomlin, Alexandre Bayen
Electrical Engineering and Computer Sciences, UC Berkeley
[balandat,walid,tomlin]@eecs.berkeley.edu, [email protected]
Abstract
We study a general adversarial online learning problem, in which we are given a
decision set X in a reflexive Banach space X and a sequence of reward vectors
in the dual space of X. At each iteration, we choose an action from X , based on
the observed sequence of previous rewards. Our goal is to minimize regret. Using
results from infinite dimensional convex analysis, we generalize the method of
Dual Averaging to our setting and obtain upper bounds on the worst-case regret that
generalize many previous results. Under the assumption of uniformly continuous
rewards, we obtain explicit regret bounds in a setting where the decision set is the
set of probability distributions on a compact metric space S. Importantly, we make
no convexity assumptions on either S or the reward functions. We also prove a
general lower bound on the worst-case regret for any online algorithm. We then
apply these results to the problem of learning in repeated two-player zero-sum
games on compact metric spaces. In doing so, we first prove that if both players play
a Hannan-consistent strategy, then with probability 1 the empirical distributions
of play weakly converge to the set of Nash equilibria of the game. We then show
that, under mild assumptions, Dual Averaging on the (infinite-dimensional) space
of probability distributions indeed achieves Hannan-consistency.
1 Introduction
Regret analysis is a general technique for designing and analyzing algorithms for sequential decision
problems in adversarial or stochastic settings (Shalev-Shwartz, 2012; Bubeck and Cesa-Bianchi,
2012). Online learning algorithms have applications in machine learning (Xiao, 2010), portfolio
optimization (Cover, 1991), online convex optimization (Hazan et al., 2007) and other areas. Regret
analysis also plays an important role in the study of repeated play of finite games (Hart and Mas-Colell, 2001). It is well known, for example, that in a two-player zero-sum finite game, if both
players play according to a Hannan-consistent strategy (Hannan, 1957), their (marginal) empirical
distributions of play almost surely converge to the set of Nash equilibria of the game (Cesa-Bianchi
and Lugosi, 2006). Moreover, it can be shown that playing a strategy that achieves sublinear regret
almost surely guarantees Hannan-consistency.
A natural question then is whether a similar result holds for games with infinite action sets. In this
article we provide a positive answer. In particular, we prove that in a continuous two-player zero sum
game over compact (not necessarily convex) metric spaces, if both players follow a Hannan-consistent
strategy, then with probability 1 their empirical distributions of play weakly converge to the set of
Nash equilibria of the game. This in turn raises another important question: Do algorithms that
ensure Hannan-consistency exist in such a setting? More generally, can one develop algorithms that
guarantee sub-linear growth of the worst-case regret? We answer these questions affirmatively as well.
To this end, we develop a general framework to study the Dual Averaging (or Follow the Regularized
Leader) method on reflexive Banach spaces. This framework generalizes a wide range of existing
results in the literature, including algorithms for online learning on finite sets (Arora et al., 2012) and
finite-dimensional online convex optimization (Hazan et al., 2007).
Given a convex subset X of a reflexive Banach space X, the generalized Dual Averaging (DA)
method maximizes, at each iteration, the cumulative past rewards (which are elements of X*, the dual space of X) minus a regularization term h. We show that under certain conditions, the maximizer in
the DA update is the Fréchet gradient Dh* of the regularizer's conjugate function. In doing so, we develop a novel characterization of the duality between essential strong convexity of h and essential Fréchet differentiability of h* in reflexive Banach spaces, which is of independent interest.
We apply these general results to the problem of minimizing regret when the rewards are uniformly
continuous functions over a compact metric space S. Importantly, we do not assume convexity of
either S or the rewards, and show that it is possible to achieve sublinear regret under a mild geometric
condition on S (namely, the existence of a locally Q-regular Borel measure). We provide explicit
bounds for a class of regularizers, which guarantee sublinear worst-case regret. We also prove a
general lower bound on the regret for any online algorithm and show that DA asymptotically achieves this bound up to a √(log t) factor.
Our results are related to work by Lehrer (2003), Sridharan and Tewari (2010), and Srebro et al. (2011). Lehrer (2003) gives necessary geometric conditions for Blackwell approachability in infinite-dimensional spaces, but no implementable algorithm guaranteeing Hannan-consistency. Sridharan
and Tewari (2010) derive general regret bounds for Mirror Descent (MD) under the assumption that
the strategy set is uniformly bounded in the norm of the Banach space. We do not make such an
assumption here. In fact, this assumption does not hold in general for our applications in Section 3.
The paper is organized as follows: In Section 2 we introduce and provide a general analysis of Dual
Averaging in reflexive Banach spaces. In Section 3 we apply these results to obtain explicit regret
bounds on compact metric spaces with uniformly continuous reward functions. We use these results
in Section 4 in the context of learning Nash equilibria in continuous two-player zero sum games, and
provide a numerical example in Section 4. All proofs are given in the supplementary material.
2 Regret Minimization on Reflexive Banach Spaces
Consider a sequential decision problem in which we are to choose a sequence (x₁, x₂, ...) of actions from some feasible subset X of a reflexive Banach space X, and seek to maximize a sequence (u₁(x₁), u₂(x₂), ...) of rewards, where the u_τ : X → R are elements of a given subset U ⊆ X*, with X* the dual space of X. We assume that x_t, the action chosen at time t, may only depend on the sequence of previously observed reward vectors (u₁, ..., u_{t−1}). We call any such algorithm an online algorithm. We consider the adversarial setting, i.e., we do not make any distributional assumptions on the rewards. In particular, they could be picked maliciously by some adversary.
The notion of regret is a standard measure of performance for such a sequential decision problem. For a sequence (u₁, ..., u_t) of reward vectors and a sequence of decisions (x₁, ..., x_t) produced by an algorithm, the regret of the algorithm w.r.t. a (fixed) decision x ∈ X is the gap between the realized reward and the reward under x, i.e., R_t(x) := Σ_{τ=1}^t u_τ(x) − Σ_{τ=1}^t u_τ(x_τ). The regret is defined as R_t := sup_{x∈X} R_t(x). An algorithm is said to have sublinear regret if for any sequence (u_t)_{t≥1} in the set of admissible reward functions U, the regret grows sublinearly, i.e. lim sup_t R_t/t ≤ 0.
Example 1. Consider a finite action set S = {1, ..., n}, let X = X* = Rⁿ, and let X = Δ_{n−1}, the probability simplex in Rⁿ. A reward function can be identified with a vector u ∈ Rⁿ, such that the i-th element u_i is the reward of action i. A choice x ∈ X corresponds to a randomization over the n actions in S. This is the classic setting of many regret-minimizing algorithms in the literature.
Example 2. Suppose S is a compact metric space with μ a finite measure on S. Consider X = X* = L²(S, μ) and let X = {x ∈ X : x ≥ 0 a.e., ‖x‖₁ = 1}. A reward function is an L²-integrable function on S, and each choice x ∈ X corresponds to a probability distribution (absolutely continuous w.r.t. μ) over S. We will explore a more general variant of this problem in Section 3.
In this Section, we prove a general bound on the worst-case regret for DA. DA was introduced
by Nesterov (2009) for (finite dimensional) convex optimization, and has also been applied to online
learning, e.g. by Xiao (2010). In the finite-dimensional case, the method solves, at each iteration, the optimization problem x_{t+1} = arg max_{x∈X} η_t ⟨Σ_{τ=1}^t u_τ, x⟩ − h(x), where h is a strongly convex regularizer defined on X ⊆ Rⁿ and (η_t)_{t≥0} is a sequence of learning rates. The regret analysis of
the method relies on the duality between strong convexity and smoothness (Nesterov, 2009, Lemma
1). In order to generalize DA to our Banach space setting, we develop an analogous duality result in
Theorem 1. In particular, we show that the correct notion of strong convexity is (uniform) essential
strong convexity. Equipped with this duality result, we analyze the regret of the Dual Averaging
method and derive a general bound in Theorem 2.
2.1 Preliminaries
Let (X, ‖·‖) be a reflexive Banach space, and denote by ⟨·, ·⟩ : X × X* → R the canonical pairing between X and its dual space X*, so that ⟨x, ξ⟩ := ξ(x) for all x ∈ X, ξ ∈ X*. By the effective domain of an extended real-valued function f : X → [−∞, +∞] we mean the set dom f = {x ∈ X : f(x) < +∞}. A function f is proper if f > −∞ and dom f is non-empty. The conjugate or Legendre–Fenchel transform of f is the function f* : X* → [−∞, +∞] given by

    f*(ξ) = sup_{x∈X} ⟨x, ξ⟩ − f(x)    (1)

for all ξ ∈ X*. If f is proper, lower semicontinuous and convex, its subdifferential ∂f is the set-valued mapping ∂f(x) = {ξ ∈ X* : f(y) ≥ f(x) + ⟨y − x, ξ⟩ for all y ∈ X}. We define dom ∂f := {x ∈ X : ∂f(x) ≠ ∅}. Let Ψ denote the set of all convex, lower semicontinuous functions ψ : [0, ∞) → [0, ∞] such that ψ(0) = 0, and let

    Ψ_U := {ψ ∈ Ψ : ψ(r) > 0 for all r > 0},    Ψ_L := {ψ ∈ Ψ : ψ(r)/r → 0 as r → 0}.    (2)

We now introduce some definitions. Additional results are reviewed in the supplementary material.
Definition 1 (Strömberg, 2011). A proper convex lower semicontinuous function f : X → (−∞, ∞] is essentially strongly convex if
(i) f is strictly convex on every convex subset of dom ∂f;
(ii) (∂f)⁻¹ is locally bounded on its domain;
(iii) for every x₀ ∈ dom ∂f there exist ξ₀ ∈ X* and ψ ∈ Ψ_U such that

    f(x) ≥ f(x₀) + ⟨x − x₀, ξ₀⟩ + ψ(‖x − x₀‖),  ∀x ∈ X.    (3)

If (3) holds with ψ independent of x₀, f is uniformly essentially strongly convex with modulus ψ.
Definition 2 (Strömberg, 2011). A proper convex lower semicontinuous function f : X → (−∞, ∞] is essentially Fréchet differentiable if int dom f ≠ ∅, f is Fréchet differentiable on int dom f with Fréchet derivative Df, and ‖Df(x_j)‖* → ∞ for any sequence (x_j)_j in int dom f converging to some boundary point of dom f.
Definition 3. A proper Fréchet differentiable function f : X → (−∞, ∞] is essentially strongly smooth if ∀x₀ ∈ dom ∂f, ∃ξ₀ ∈ X*, ∃ψ ∈ Ψ_L such that

    f(x) ≤ f(x₀) + ⟨ξ₀, x − x₀⟩ + ψ(‖x − x₀‖),  ∀x ∈ X.    (4)

If (4) holds with ψ independent of x₀, f is uniformly essentially strongly smooth with modulus ψ.
With this we are now ready to give our main duality result:
Theorem 1. Let f : X → (−∞, +∞] be proper, lower semicontinuous and uniformly essentially strongly convex with modulus ψ ∈ Ψ_U. Then
(i) f* is proper and essentially Fréchet differentiable with Fréchet derivative

    Df*(ξ) = arg max_{x∈X} ⟨x, ξ⟩ − f(x).    (5)

If, in addition, ψ̄(r) := ψ(r)/r is strictly increasing, then

    ‖Df*(ξ₁) − Df*(ξ₂)‖ ≤ ψ̄⁻¹(‖ξ₁ − ξ₂‖*/2).    (6)

In other words, Df* is uniformly continuous with modulus of continuity ω(r) = ψ̄⁻¹(r/2).
(ii) f* is uniformly essentially strongly smooth with modulus ψ*.
Corollary 1. If ψ(r) ≥ C r^{1+κ} for all r ≥ 0, then ‖Df*(ξ₁) − Df*(ξ₂)‖ ≤ (2C)^{−1/κ} ‖ξ₁ − ξ₂‖*^{1/κ}.
In particular, with ψ(r) = (K/2) r², Definition 1 becomes the classic definition of K-strong convexity, and (6) yields the result, familiar from the finite-dimensional case, that the gradient Df* is 1/K-Lipschitz with respect to the dual norm (Nesterov, 2009, Lemma 1).
2.2 Dual Averaging in Reflexive Banach Spaces
We call a proper convex function h : X → (−∞, +∞] a regularizer function on a set X ⊆ X if h is essentially strongly convex and dom h = X. We emphasize that we do not assume h to be Fréchet differentiable. Definition 1 in conjunction with Lemma S.1 (supplemental material) implies that for any regularizer h, the supremum of any function of the form ⟨·, ξ⟩ − h(·) over X, where ξ ∈ X*, will be attained at a unique element of X, namely Dh*(ξ), the Fréchet gradient of h* at ξ.
DA with regularizer h and a sequence of learning rates (η_t)_{t≥1} generates a sequence of decisions using the simple update rule x_{t+1} = Dh*(η_t U_t), where U_t = Σ_{τ=1}^t u_τ and U₀ := 0.
Theorem 2. Let h be a uniformly essentially strongly convex regularizer on X with modulus ψ and let (η_t)_{t≥1} be a positive non-increasing sequence of learning rates. Then, for any sequence of payoff functions (u_t)_{t≥1} in X* for which there exists M < ∞ such that sup_{x∈X} |⟨u_t, x⟩| ≤ M for all t, the sequence of plays (x_t)_{t≥0} given by

    x_{t+1} = Dh*(η_t Σ_{τ=1}^t u_τ)    (7)

ensures that

    R_t(x) := Σ_{τ=1}^t ⟨u_τ, x⟩ − Σ_{τ=1}^t ⟨u_τ, x_τ⟩ ≤ (h(x) − h̄)/η_t + Σ_{τ=1}^t ‖u_τ‖* ψ̄⁻¹(η_{τ−1} ‖u_τ‖*/2),    (8)

where h̄ = inf_{x∈X} h(x), ψ̄(r) := ψ(r)/r, and η₀ := η₁.
It is possible to obtain a regret bound similar to (8) also in a continuous-time setting. In fact,
following Kwon and Mertikopoulos (2014), we derive the bound (8) by first proving a bound on
a suitably defined notion of continuous-time regret, and then bounding the difference between the
continuous-time and discrete-time regrets. This analysis is detailed in the supplementary material.
Note that the condition sup_{x∈X} |⟨u_t, x⟩| ≤ M in Theorem 2 is weaker than the one in Sridharan and Tewari (2010), as it does not imply a uniformly bounded strategy set (e.g., if X = L²(R) and X is the set of distributions on R, then X is unbounded in L², but the condition may still hold).
Theorem 2 provides a regret bound for a particular choice x ∈ X. Recall that R_t := sup_{x∈X} R_t(x). In Example 1 the set X is compact, so any continuous regularizer h will be bounded, and hence taking the supremum over x in (8) poses no issue. However, this is not the case in our general setting, as the regularizer may be unbounded on X. For instance, consider Example 2 with the entropy regularizer h(x) = ∫_S x(s) log(x(s)) ds, which is easily seen to be unbounded on X. As a consequence, obtaining a worst-case bound will in general require additional assumptions on the reward functions and the decision set X.
Corollary 2. Suppose that ψ(r) ≥ C r^{1+κ} for all r ≥ 0, for some C > 0 and κ > 0. Then

    R_t(x) ≤ (h(x) − h̄)/η_t + (2C)^{−1/κ} Σ_{τ=1}^t η_{τ−1}^{1/κ} ‖u_τ‖*^{1+1/κ}.    (9)

In particular, if ‖u_t‖* ≤ M for all t and η_t = η t^{−β}, then

    R_t(x) ≤ (h(x) − h̄)/η · t^β + κ/(κ − β) · (η/(2C))^{1/κ} M^{1+1/κ} · t^{1−β/κ}.    (10)

Assuming h is bounded, optimizing over β yields a rate of R_t(x) = O(t^{κ/(1+κ)}). In particular, if ψ(r) = (K/2) r², which corresponds to the classic definition of strong convexity, then R_t(x) = O(√t). For non-vanishing u_τ we will need η_t ↘ 0 for the sum in (9) to converge. Thus we could get potentially tighter control over the rate of this term for β < 1, at the expense of larger constants.
3 Online Optimization on Compact Metric Spaces
We now apply the above results to the problem of minimizing regret on compact metric spaces, under the additional assumption of uniformly continuous reward functions. We make no assumptions on convexity of either the feasible set or the rewards. Essentially, we lift the non-convex problem of minimizing a sequence of functions over the (possibly non-convex) set S to the convex (albeit infinite-dimensional) problem of minimizing a sequence of linear functionals over a set X of probability measures (a convex subset of the vector space of measures on S).
3.1 An Upper Bound on the Worst-Case Regret
Let (S, d) be a compact metric space, and let μ be a Borel measure on S. Suppose that the reward vectors u_τ are given by elements of L^q(S, μ), where q > 1. Let X = L^p(S, μ), where p and q are Hölder conjugates, i.e., 1/p + 1/q = 1. Consider X = {x ∈ X : x ≥ 0 a.e., ‖x‖₁ = 1}, the set of probability measures on S that are absolutely continuous w.r.t. μ with p-integrable Radon–Nikodym derivatives. Moreover, denote by Z the class of non-decreasing functions ζ : [0, ∞) → [0, ∞] such that lim_{r→0} ζ(r) = ζ(0) = 0. The following assumption will be made throughout this section:
Assumption 1. The reward vectors u_t have modulus of continuity ζ on S, uniformly in t. That is, there exists ζ ∈ Z such that |u_t(s) − u_t(s′)| ≤ ζ(d(s, s′)) for all t and for all s, s′ ∈ S.
Let B(s, r) = {s′ ∈ S : d(s, s′) < r}, and denote by B(s, δ) ⊆ X the set of elements of X with support contained in B(s, δ). Furthermore, let D_S := sup_{s,s′∈S} d(s, s′). Then we have the following:
Theorem 3. Let (S, d) be compact, and suppose that Assumption 1 holds. Let h be a uniformly essentially strongly convex regularizer on X with modulus ψ, and let (η_t)_{t≥1} be a positive non-increasing sequence of learning rates. Then, under (7), for any positive sequence (δ_t)_{t≥1},

    R_t ≤ (sup_{s∈S} inf_{x∈B(s,δ_t)} h(x) − h̄)/η_t + t ζ(δ_t) + Σ_{τ=1}^t ‖u_τ‖* ψ̄⁻¹(η_{τ−1} ‖u_τ‖*/2).    (11)
Remark 1. The sequence (δ_t)_{t≥1} in Theorem 3 is not a parameter of the algorithm, but rather a parameter in the regret bound. In particular, (11) holds true for any such sequence, and we will use this fact later on to obtain explicit bounds by instantiating (11) with a particular choice of (δ_t)_{t≥1}.
It is important to realize that the infimum over B(s, δ_t) in (11) may be infinite, in which case the bound is meaningless. This happens, for example, if s is an isolated point of some S ⊂ Rⁿ and μ is the Lebesgue measure, in which case B(s, δ_t) = ∅. However, under an additional regularity assumption on the measure μ we can avoid such degenerate situations.
Definition 4 (Heinonen et al., 2015). A Borel measure μ on a metric space (S, d) is (Ahlfors) Q-regular if there exist 0 < c₀ ≤ C₀ < ∞ such that for any open ball B(s, r),

    c₀ r^Q ≤ μ(B(s, r)) ≤ C₀ r^Q.    (12)

We say that μ is r₀-locally Q-regular if (12) holds for all 0 < r ≤ r₀.
Intuitively, under an r₀-locally Q-regular measure, the mass in the neighborhood of any point of S is uniformly bounded from above and below. This will allow us, at each iteration t, to assign sufficient probability mass around the maximizer(s) of the cumulative reward function.
Example 3. The canonical example of a Q-regular measure is the Lebesgue measure on Rⁿ. If d is the metric induced by the Euclidean norm, then Q = n and the bound (12) is tight with c₀ = C₀, a dimensional constant. However, for general sets S ⊂ Rⁿ, μ need not be locally Q-regular. A sufficient condition for local regularity of μ is that S is v-uniformly fat (Krichene et al., 2015).
Assumption 2. The measure μ is r₀-locally Q-regular on (S, d).
Under Assumption 2, B(s, δ_t) ≠ ∅ for all s ∈ S and δ_t > 0, hence we may hope for a bound on inf_{x∈B(s,δ_t)} h(x) uniform in s. To obtain explicit convergence rates, we have to consider a more specific class of regularizers.
3.2 Explicit Rates for f-Divergences on L^p(S)
We consider a particular class of regularizers called f-divergences or Csiszár divergences (Csiszár, 1967). Following Audibert et al. (2014), we define ω-potentials and the associated f-divergences.
Definition 5. Let ω ≤ 0 and a ∈ (−∞, +∞]. A continuous increasing diffeomorphism φ : (−∞, a) → (ω, ∞) is an ω-potential if lim_{z→−∞} φ(z) = ω, lim_{z→a} φ(z) = +∞, and φ(0) ≤ 1. Associated to φ is the convex function f_φ : [0, ∞) → R defined by f_φ(x) = ∫₁ˣ φ⁻¹(z) dz, and the f_φ-divergence, defined by h_φ(x) = ∫_S f_φ(x(s)) dμ(s) + ι_X(x), where ι_X is the indicator function of X (i.e. ι_X(x) = 0 if x ∈ X and ι_X(x) = +∞ if x ∉ X).
A remarkable fact is that for regularizers based on ω-potentials, the DA update (7) can be computed efficiently. More precisely, it can be shown (see Proposition 3 in Krichene (2015)) that the maximizer in this case has a simple expression in terms of the dual problem, and the problem of computing x_{t+1} = Dh*(η_t Σ_{τ=1}^t u_τ) reduces to computing a scalar dual variable ν_t*.
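As an illustration of this reduction, the sketch below computes the DA update on a discretized S by solving for the scalar dual variable ν* with a root finder. It assumes, following Proposition 3 of Krichene (2015), that the maximizer takes the form x(s) = max(φ(ξ(s) − ν*), 0) with ν* chosen so that x integrates to one; the uniform discretization and the choice of root finder are our own illustrative assumptions.

```python
import numpy as np
from scipy.optimize import brentq

def da_update(xi, mu_weights, phi):
    """DA update x(s) = max(phi(xi(s) - nu), 0), with nu such that sum_s x(s) mu(s) = 1."""
    def total_mass(nu):
        return np.sum(np.maximum(phi(xi - nu), 0.0) * mu_weights) - 1.0
    lo, hi = xi.min() - 10.0, xi.max() + 10.0
    while total_mass(lo) < 0:          # widen until the bracket contains the root
        lo -= 10.0
    while total_mass(hi) > 0:          # total mass is decreasing in nu
        hi += 10.0
    nu = brentq(total_mass, lo, hi)
    return np.maximum(phi(xi - nu), 0.0)

# Example: exponential potential phi(z) = exp(z - 1) (the entropy regularizer) on a grid.
n = 200
mu_w = np.full(n, 1.0 / n)             # uniform measure with mu(S) = 1
grid = (np.arange(n) + 0.5) / n
xi = np.sin(4 * grid)                  # stands in for eta_t * U_t
x = da_update(xi, mu_w, lambda z: np.exp(z - 1.0))
print(np.sum(x * mu_w))                # ≈ 1.0; x matches the softmax of xi up to discretization
```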
Proposition 1. Suppose that μ(S) = 1, and that Assumption 2 holds with constants r₀ > 0 and 0 < c₀ ≤ C₀ < ∞. Under the assumptions of Theorem 3, with h = h_φ the regularizer associated to an ω-potential φ, we have that, for any positive sequence (δ_t)_{t≥1} with δ_t ≤ r₀,

    R_t/t ≤ min(C₀ δ_t^Q, μ(S)) f_φ(c₀⁻¹ δ_t^{−Q}) / (t η_t) + ζ(δ_t) + (1/t) Σ_{τ=1}^t ‖u_τ‖* ψ̄⁻¹(η_{τ−1} ‖u_τ‖*/2).    (13)

For particular choices of the sequences (η_t)_{t≥1} and (δ_t)_{t≥1}, we can derive explicit regret rates.
3.3 Analysis for Entropy Dual Averaging (The Generalized Hedge Algorithm)
Taking φ(z) = e^{z−1}, we have f_φ(x) = ∫₁ˣ φ⁻¹(z) dz = x log x, and hence the regularizer is h_φ(x) = ∫_S x(s) log x(s) dμ(s). Then Dh_φ*(ξ)(s) = exp(ξ(s)) / ‖exp ξ‖₁. This corresponds to a generalized Hedge algorithm (Arora et al., 2012; Krichene et al., 2015) or the entropic barrier of Bubeck and Eldan (2014) for Euclidean spaces. The regularizer h_φ can be shown to be essentially strongly convex with modulus ψ(r) = ½ r².
Corollary 3. Suppose that μ(S) = 1, that μ is r₀-locally Q-regular with constants c₀, C₀, that ‖u_t‖∞ ≤ M for all t, and that ζ(r) = C_α r^α for 0 < α ≤ 1 (that is, the rewards are α-Hölder continuous). Then, under Entropy Dual Averaging, choosing η_t = θ √(log t / t) with

    θ = M⁻¹ ( (C₀/(2c₀)) ( log(c₀⁻¹ γ^{−Q/α}) + Q/(2α) ) )^{1/2}

and γ > 0, we have that

    R_t/t ≤ 2M ( (2C₀/c₀) ( log(c₀⁻¹ γ^{−Q/α}) + Q/(2α) ) )^{1/2} √(log t / t) + C_α γ^α √(log t / t)    (14)

whenever √(log t / t) < (r₀/γ)^α.
One can now further optimize over the choice of γ to obtain the best constant in the bound. Note also that the case α = 1 corresponds to Lipschitz continuity.
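A minimal simulation of Entropy Dual Averaging on S = [0, 1], discretized for illustration. The measure, the reward functions, and the constants are hypothetical, and the learning-rate schedule follows the √(log t / t) shape of Corollary 3 with the constant θ set to 1.

```python
import numpy as np

rng = np.random.default_rng(3)
n_bins, T = 400, 5000
s = (np.arange(n_bins) + 0.5) / n_bins            # bin centers approximating S = [0, 1]
U = np.zeros(n_bins)                              # cumulative rewards, U_0 = 0
cum_reward, cum_best = 0.0, np.zeros(n_bins)

for t in range(1, T + 1):
    eta = np.sqrt(np.log(t + 1) / t)              # learning rate ~ sqrt(log t / t)
    logits = eta * U
    x = np.exp(logits - logits.max()); x /= x.sum()   # x_{t+1}(s) ∝ exp(eta_t U_t(s))
    a = rng.choice(n_bins, p=x)                   # sample an action from the DA iterate
    u = -np.abs(s - 0.3) ** 0.5 + 0.2 * np.sin(10 * s + 0.01 * t)  # bounded 1/2-Hölder rewards
    cum_reward += u[a]
    cum_best += u
    U += u                                        # full-information feedback on all of S
print((cum_best.max() - cum_reward) / T)          # average regret; decays toward 0
```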
3.4 A General Lower Bound
Theorem 4. Let (S, d) be compact, suppose that Assumption 2 holds, and let w : R → R be any function with modulus of continuity ζ ∈ Z such that ‖w(d(·, s₀))‖_q ≤ M for some s₀ ∈ S for which there exists s ∈ S with d(s, s₀) = D_S. Then for any online algorithm there exists a sequence (u_τ)_{τ=1}^t of reward vectors u_τ ∈ X* with ‖u_τ‖∞ ≤ M and modulus of continuity ζ̃ < ζ such that

    R_t ≥ (w(D_S) / (2√2)) √t.    (15)

Maximizing the constant in (15) is of interest in order to benchmark the bound against the upper bounds obtained in the previous sections. This problem is, however, quite challenging, and we defer this analysis to future work. For Hölder-continuous functions, we have the following result:
Proposition 2. In the setting of Theorem 4, suppose that μ(S) = 1 and that ζ(r) = C_α r^α for some 0 < α ≤ 1. Then

    R_t ≥ (min(C_α D_S^α, M) / (2√2)) √t.    (16)

Observe that, up to a √(log t) factor, the asymptotic rate of this general lower bound for any online algorithm matches that of the upper bound (14) of Entropy Dual Averaging.
4 Learning in Continuous Two-Player Zero-Sum Games
Consider a two-player zero-sum game G = (S₁, S₂, u), in which the strategy spaces S₁ and S₂ of players 1 and 2, respectively, are Hausdorff spaces, and u : S₁ × S₂ → R is the payoff function of player 1 (as G is zero-sum, the payoff function of player 2 is −u). For each i, denote by P_i := P(S_i) the set of Borel probability measures on S_i. Denote S := S₁ × S₂ and P := P₁ × P₂. For a (joint) mixed strategy x ∈ P, we define the natural extension ū : P → R by ū(x) := E_x[u] = ∫_S u(s₁, s₂) dx(s₁, s₂), which is the expected payoff of player 1 under x.
A continuous zero-sum game G is said to have value V if

    sup_{x₁∈P₁} inf_{x₂∈P₂} ū(x₁, x₂) = inf_{x₂∈P₂} sup_{x₁∈P₁} ū(x₁, x₂) = V.    (17)

The elements x₁ ⊗ x₂ ∈ P at which (17) holds are the (mixed) Nash equilibria of G. We denote the set of Nash equilibria of G by N(G). In the case of finite games, it is well known that every two-player zero-sum game has a value. This is not true in general for continuous games, and additional conditions on strategy sets and payoffs are required; see e.g. (Glicksberg, 1950).
4.1 Repeated Play
We consider repeated play of the continuous two-player zero-sum game. Given a game G and a sequence of plays (s_t¹)_{t≥1} and (s_t²)_{t≥1}, we say that player i has sublinear (realized) regret if

    lim sup_{t→∞} (1/t) [ sup_{sⁱ∈Sᵢ} Σ_{τ=1}^t uᵢ(sⁱ, s_τ⁻ⁱ) − Σ_{τ=1}^t uᵢ(s_τⁱ, s_τ⁻ⁱ) ] ≤ 0,    (18)

where we use −i to denote the other player.
A strategy σⁱ for player i is, loosely speaking, a (possibly random) mapping from past observations to actions. Of primary interest to us are Hannan-consistent strategies:
Definition 6 (Hannan, 1957). A strategy σⁱ of player i is Hannan-consistent if, for any sequence (s_t⁻ⁱ)_{t≥1}, the sequence of plays (s_tⁱ)_{t≥1} generated by σⁱ has sublinear regret almost surely.
Note that the almost sure statement in Definition 6 is with respect to the randomness in the strategy σⁱ. The following result is a generalization of its counterpart for discrete games (e.g. Corollary 7.1 in (Cesa-Bianchi and Lugosi, 2006)):
Proposition 3. Suppose G has value V, consider a sequence of plays (s_t¹)_{t≥1}, (s_t²)_{t≥1}, and assume that both players have sublinear realized regret. Then lim_{t→∞} (1/t) Σ_{τ=1}^t u(s_τ¹, s_τ²) = V.
As in the discrete case (Cesa-Bianchi and Lugosi, 2006), we can also say something about convergence of the empirical distributions of play to the set of Nash equilibria. Since these distributions have finite support for every t, we can at best hope for convergence in the weak sense, as follows:
Theorem 5. Suppose that in a repeated two-player zero-sum game G that has a value, both players follow a Hannan-consistent strategy, and denote by x̄_tⁱ = (1/t) Σ_{τ=1}^t δ_{s_τⁱ} the marginal empirical distribution of play of player i at iteration t. Let x̄_t := (x̄_t¹, x̄_t²). Then x̄_t ⇀ N(G) almost surely, that is, with probability 1 the sequence (x̄_t)_{t≥1} weakly converges to the set of Nash equilibria of G.
Corollary 4. If G has a unique Nash equilibrium x*, then with probability 1, x̄_t ⇀ x*.
4.2 Hannan-Consistent Strategies
By Theorem 5, if each player follows a Hannan-consistent strategy, then the empirical distributions of play weakly converge to the set of Nash equilibria of the game. But do such strategies exist? Regret-minimizing strategies are intuitive candidates, and the intimate connection between regret minimization and learning in games is well studied in many cases, e.g. for finite games (Cesa-Bianchi and Lugosi, 2006) or potential games (Monderer and Shapley, 1996). Using our results from Section 3, we will show that, under the appropriate assumption on the information revealed to the players, no-regret learning based on Dual Averaging leads to Hannan consistency in our setting.
Specifically, suppose that after each iteration t, each player i observes a partial payoff function ũ_tⁱ : Sᵢ → R describing their payoff as a function of only their own action sⁱ, holding the action played by the other player fixed. That is, ũ_t¹(s¹) := u(s¹, s_t²) and ũ_t²(s²) := −u(s_t¹, s²).
Remark 2. Note that we do not assume that the players have knowledge of the joint utility function u. However, we do assume that each player has full-information feedback, in the sense that they observe the partial reward functions u(·, s_τ⁻ⁱ) on their entire action set, as opposed to only observing the reward u(s_τ¹, s_τ²) of the action actually played (the latter corresponds to the bandit setting).
We denote by Ũ_tⁱ = (ũ_τⁱ)_{τ=1}^t the sequence of partial payoff functions observed by player i. We use U_tⁱ to denote the set of all possible such histories, and define U₀ⁱ := ∅. A strategy σⁱ of player i is a collection (σ_tⁱ)_{t=1}^∞ of (possibly random) mappings σ_tⁱ : U_{t−1}ⁱ → Sᵢ, such that at iteration t, player i plays s_tⁱ = σ_tⁱ(Ũ_{t−1}ⁱ). We make the following assumption on the payoff function:
Assumption 3. The payoff function u is uniformly continuous in sⁱ, with modulus of continuity independent of s⁻ⁱ, for i = 1, 2. That is, for each i there exists ζᵢ ∈ Z such that |u(s, s⁻ⁱ) − u(s′, s⁻ⁱ)| ≤ ζᵢ(dᵢ(s, s′)) for all s⁻ⁱ ∈ S₋ᵢ.
It is easy to see that Assumption 3 implies that the game has a value (see the supplementary material). It also makes our setting compatible with that of Section 3. Suppose now that each player randomizes their play according to the sequence of probability distributions on Sᵢ generated by DA with regularizer hᵢ. That is, suppose that each σ_tⁱ is a random variable with the following distribution:

    σ_tⁱ ∼ Dh_i*(η_{t−1} Σ_{τ=1}^{t−1} ũ_τⁱ).    (19)
Theorem 6. Suppose that player i uses strategy σⁱ according to (19), and that the DA algorithm ensures sublinear regret (i.e. lim sup_t R_t/t ≤ 0). Then σⁱ is Hannan-consistent.
Corollary 5. If both players use strategies according to (19), with the respective Dual Averaging ensuring that lim sup_t R_t/t ≤ 0, then with probability 1 the sequence (x̄_t)_{t≥1} of empirical distributions of play weakly converges to the set of Nash equilibria of G.
Example. Consider a zero-sum game G₁ between two players on the unit interval, with payoff function u(s₁, s₂) = s₁s₂ − a¹s₁ − a²s₂, where a¹ = (e−2)/(e−1) and a² = 1/(e−1). It is easy to verify that the pair x₁(s) = e^s/(e−1), x₂(s) = e^{1−s}/(e−1) is a mixed-strategy Nash equilibrium of G₁. For sequences (s_τ¹)_{τ=1}^t and (s_τ²)_{τ=1}^t, the cumulative payoff functions for a fixed action s ∈ [0, 1] are given, respectively, by

    U_t¹(s¹) = (Σ_{τ=1}^t s_τ² − a¹ t) s¹ − a² Σ_{τ=1}^t s_τ²
    U_t²(s²) = (a² t − Σ_{τ=1}^t s_τ¹) s² + a¹ Σ_{τ=1}^t s_τ¹

If each player i uses the Generalized Hedge Algorithm with learning rates (η_τ)_{τ=1}^t, their strategy in period t is to sample from the distribution x_tⁱ(s) ∝ exp(λ_tⁱ s), where λ_t¹ = η_t (Σ_{τ=1}^t s_τ² − a¹ t) and λ_t² = η_t (a² t − Σ_{τ=1}^t s_τ¹). Interestingly, in this case the sum of the opponent's past plays is a sufficient statistic, in the sense that it completely determines the mixed strategy at time t.
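A minimal simulation of this example (a hypothetical setup: the Θ(1/√t) learning-rate constant and the horizon are illustrative). Sampling from the density x(s) ∝ exp(λ s) on [0, 1] is done by inverting its CDF:

```python
import numpy as np

rng = np.random.default_rng(4)
a1, a2 = (np.e - 2) / (np.e - 1), 1 / (np.e - 1)

def sample_exp_density(lam, rng):
    """Sample s in [0, 1] with density proportional to exp(lam * s), via inverse CDF."""
    if abs(lam) < 1e-12:
        return rng.random()
    return np.log1p(rng.random() * np.expm1(lam)) / lam

T = 200_000
sum1 = sum2 = 0.0                        # running sums of past plays (sufficient statistics)
for t in range(1, T + 1):
    eta = 1.0 / np.sqrt(t)
    lam1 = eta * (sum2 - a1 * (t - 1))   # player 1 uses the opponent's history up to t-1
    lam2 = eta * (a2 * (t - 1) - sum1)   # player 2 likewise
    s1, s2 = sample_exp_density(lam1, rng), sample_exp_density(lam2, rng)
    sum1 += s1; sum2 += s2

# Empirical means of play; the equilibrium densities x1, x2 have means
# 1/(e-1) ≈ 0.582 and (e-2)/(e-1) ≈ 0.418, respectively.
print(sum1 / T, sum2 / T)
```

By Theorem 5, the empirical distributions (and hence the empirical means printed above) should approach the equilibrium values as T grows, even though the individual strategies keep oscillating.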
[Figure 1 graphic omitted: six panels of normalized histograms of play for player 1 (against the equilibrium density $x^1(s)$) and player 2 (against $x^2(s)$) at $t = 5000$, $50000$, and $500000$.]
Figure 1: Normalized histograms of the empirical distributions of play in $G_1$ (100 bins)
Figure 1 shows normalized histograms of the empirical distributions of play at different iterations $t$. As $t$ grows the histograms approach the equilibrium densities $x^1$ and $x^2$, respectively. However, this does not mean that the individual strategies $x^i_t$ converge. Indeed, Figure 2 shows the $\theta^i_t$ oscillating around the equilibrium parameters $1$ and $-1$, respectively, even for very large $t$. We do, however, observe that the time-averaged parameters $\bar{\theta}^i_t$ converge to the equilibrium values $1$ and $-1$.
[Figure 2 graphic omitted: trajectories of $\theta^1_t$, $\bar{\theta}^1_t$, $\theta^2_t$, $\bar{\theta}^2_t$ for $t$ from $10^0$ to $10^6$.]
Figure 2: Evolution of parameters $\theta^i_t$ and $\bar{\theta}^i_t := \frac{1}{t}\sum_{\tau=1}^{t} \theta^i_\tau$ in $G_1$
In the supplementary material we provide additional numerical examples, including one that illustrates
how our algorithms can be utilized as a tool to compute approximate Nash equilibria in continuous
zero-sum games on non-convex domains.
References
Sanjeev Arora, Elad Hazan, and Satyen Kale. The multiplicative weights update method: a meta-algorithm and applications. Theory of Computing, 8(1):121–164, 2012.
Jean-Yves Audibert, Sébastien Bubeck, and Gábor Lugosi. Regret in online combinatorial optimization. Mathematics of Operations Research, 39(1):31–45, 2014.
S. Bubeck and R. Eldan. The entropic barrier: a simple and optimal universal self-concordant barrier. ArXiv e-prints, December 2014.
Sébastien Bubeck and Nicolò Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Foundations and Trends in Machine Learning, 5(1):1–122, 2012.
Nicolò Cesa-Bianchi and Gábor Lugosi. Prediction, Learning, and Games. Cambridge UP, 2006.
Thomas M. Cover. Universal portfolios. Mathematical Finance, 1(1):1–29, 1991.
Imre Csiszár. Information-type measures of difference of probability distributions and indirect observations. Studia Scientiarum Mathematicarum Hungarica, 2:299–318, 1967.
Irving L. Glicksberg. Minimax theorem for upper and lower semicontinuous payoffs. Research Memorandum RM-478, The RAND Corporation, Oct 1950.
James Hannan. Approximation to Bayes risk in repeated play. In Contributions to the Theory of Games, vol III of Annals of Mathematics Studies 39. Princeton University Press, 1957.
Sergiu Hart and Andreu Mas-Colell. A general class of adaptive strategies. Journal of Economic Theory, 98(1):26–54, 2001.
Elad Hazan, Amit Agarwal, and Satyen Kale. Logarithmic regret algorithms for online convex optimization. Machine Learning, 69(2-3):169–192, 2007.
Juha Heinonen, Pekka Koskela, Nageswari Shanmugalingam, and Jeremy T. Tyson. Sobolev Spaces on Metric Measure Spaces: An Approach Based on Upper Gradients. New Mathematical Monographs. Cambridge University Press, 2015.
Walid Krichene. Dual averaging on compactly-supported distributions and application to no-regret learning on a continuum. CoRR, abs/1504.07720, 2015.
Walid Krichene, Maximilian Balandat, Claire Tomlin, and Alexandre Bayen. The Hedge Algorithm on a Continuum. In 32nd International Conference on Machine Learning, pages 824–832, 2015.
Joon Kwon and Panayotis Mertikopoulos. A continuous-time approach to online optimization. ArXiv e-prints, January 2014.
Ehud Lehrer. Approachability in infinite dimensional spaces. International Journal of Game Theory, 31(2):253–268, 2003.
Dov Monderer and Lloyd S. Shapley. Potential games. Games and Economic Behavior, 14(1):124–143, 1996.
Yurii Nesterov. Primal-dual subgradient methods for convex problems. Mathematical Programming, 120(1):221–259, 2009.
Shai Shalev-Shwartz. Online learning and online convex optimization. Foundations and Trends in Machine Learning, 4(2):107–194, 2012.
Nati Srebro, Karthik Sridharan, and Ambuj Tewari. On the universality of online mirror descent. In Advances in Neural Information Processing Systems 24 (NIPS), pages 2645–2653. 2011.
Karthik Sridharan and Ambuj Tewari. Convex games in Banach spaces. In COLT 2010 - The 23rd Conference on Learning Theory, pages 1–13, Haifa, Israel, June 2010.
Thomas Strömberg. Duality between Fréchet differentiability and strong convexity. Positivity, 15(3):527–536, 2011.
Lin Xiao. Dual averaging methods for regularized stochastic learning and online optimization. J. Mach. Learn. Res., 11:2543–2596, December 2010.
5,766 | 6,217 | Density Estimation via Discrepancy Based
Adaptive Sequential Partition
Dangna Li
ICME,
Stanford University
Stanford, CA 94305
[email protected]
Kun Yang
Google
Mountain View, CA 94043
[email protected]
Wing Hung Wong
Department of Statistics
Stanford University
Stanford, CA 94305
[email protected]
Abstract
Given iid observations from an unknown absolutely continuous distribution defined on some domain $\Omega$, we propose a nonparametric method to learn a piecewise
constant function to approximate the underlying probability density function. Our
density estimate is a piecewise constant function defined on a binary partition of
$\Omega$. The key ingredient of the algorithm is to use discrepancy, a concept originating
from Quasi Monte Carlo analysis, to control the partition process. The resulting
algorithm is simple, efficient, and has a provable convergence rate. We empirically
demonstrate its efficiency as a density estimation method. We also show how it can
be utilized to find good initializations for k-means.
1 Introduction
Density estimation is one of the fundamental problems in statistics. Once an explicit estimate of the
density function is constructed, various kinds of statistical inference tasks follow naturally. Given iid
observations, our goal in this paper is to construct an estimate of their common density function via a
nonparametric domain partition approach.
As pointed out in [1], for density estimation, the bias due to the limited approximation power of a parametric family will become dominant in the overall error as the sample size grows. Hence it is necessary to adopt a nonparametric approach to handle this bias.
is a popular nonparametric density estimation method. Although in theory it can achieve the optimal convergence rate when the kernel and the bandwidth are appropriately chosen, its results can be sensitive to the choice of bandwidth, especially in high dimensions. In practice, kernel density
estimation is typically not applicable to problems of dimension higher than 6.
Another widely used nonparametric density estimation method in low dimensions is the histogram. But similarly to kernel density estimation, it cannot be scaled easily to higher dimensions. Motivated by the usefulness of histograms and the need for a method to handle higher dimensional cases, we
propose a novel nonparametric density estimation method which learns a piecewise constant density
function defined on a binary partition of domain ?.
A key ingredient of any partition based method is the stopping decision. Based on the observation that for any piecewise constant density, the distribution conditioned on each sub-region is uniform, we propose to use star discrepancy, a concept originating from the analysis of Quasi-Monte Carlo methods,
to formally measure the degree of uniformity. We will see in section 4 that this allows our density
estimator to have near optimal convergence rate.
In summary, we highlight our contribution as follows:
• To the best of our knowledge, our method is the first density estimation method that utilizes
Quasi-Monte Carlo technique in density estimation.
• We provide an error analysis of the binary partition based density estimation method. We establish an $O(n^{-1/2})$ error bound for the density estimator. The result is optimal in the sense
that essentially all Monte Carlo methods have the same convergence rate. Our simulation
results support the tightness of this bound.
• One of the advantages of our method over existing ones is its efficiency. We demonstrate in section 5 that our method has accuracy comparable to other methods in terms of Hellinger distance while achieving an approximately $10^2$-fold speed-up.
• Our method is a general data exploration tool and is readily applicable to many important
learning tasks. Specifically, we demonstrate in section 5.3 how it can be used to find good
initializations for k-means.
2 Related work
Existing domain partition based density estimators can be divided into two categories: the first
category belongs to the Bayesian nonparametric framework. Optional Pólya Tree (OPT) [3] is a
class of nonparametric conjugate priors on the set of piecewise constant density functions defined on
some partition of ?. Bayesian Sequential Partitioning (BSP) [1] is introduced as a computationally
more attractive alternative to OPT. Inferences for both methods are performed by sampling from the
posterior distribution of density functions. Our improvement over these two methods is two-fold.
First, we no longer restrict the binary partition to be always at the middle. By introducing a new
statistic called the "gap", we allow the partitions to be adaptive to the data. Second, our method
does not stem from a Bayesian origin and proceeds in a top down, greedy fashion. This makes our
method computationally much more attractive than OPT and BSP, whose inference can be quite
computationally intensive.
The second category is tree based density estimators [4] [5]. As an example, Density Estimation
Trees [5] are a generalization of classification trees and regression trees to the task of density estimation.
Its tree based origin has led to a loss minimization perspective: the learning of the tree is done by
minimizing the integrated squared error. However, the true loss function can only be approximated by
a surrogate and the optimization problem is difficult to solve. The objective of our method is much
simpler and leads to an intuitive and efficient algorithm.
3 Main algorithm
3.1 Notations and definitions
In this paper we consider the problem of estimating a joint density function f from a given set of
observations. Without loss of generality, we assume the data domain $\Omega = [0,1]^d$, a hyper-rectangle in $\mathbb{R}^d$. We use the shorthand notation $[a, b] = \prod_{j=1}^{d} [a_j, b_j]$ to denote a hyper-rectangle in $\mathbb{R}^d$, where $a = (a_1, \cdots, a_d)$, $b = (b_1, \cdots, b_d) \in [0,1]^d$. Each $(a_j, b_j)$ pair specifies the lower and upper bound
of the hyper-rectangle along dimension j.
We restrict our attention to the class of piecewise constant functions after balancing the trade-off
between simplicity and representational power: Ideally, we would like the function class to have
concise representation while at the same time allowing for efficient evaluation. On the other hand,
we would like to be able to approximate any continuous density function arbitrarily well (at least as
the sample size goes to infinity). This trade-off has led us to choose the set of piecewise constant
functions supported on binary partitions: First, we only need 2d + 1 floating point numbers to
uniquely define a sub-rectangle (2d for its location and 1 for its density value). Second, it is well
known that the set of positive, integrable, piecewise constant functions is dense in $L^p$ for $p \in [1, \infty)$.
The binary partition we consider can be defined in the following recursive way: starting with
$P_0 = \Omega$. Suppose we have a binary partition $P_t = \{\Omega^{(1)}, \cdots, \Omega^{(t)}\}$ at level $t$, where $\cup_{i=1}^{t} \Omega^{(i)} = \Omega$ and $\Omega^{(i)} \cap \Omega^{(j)} = \emptyset$ for $i \ne j$; a level $t+1$ partition $P_{t+1}$ is obtained by dividing one sub-rectangle $\Omega^{(i)}$ in $P_t$ along one of its coordinates, parallel to one of the dimensions. See Figure 1 for an illustration.
3.2 Adaptive partition and discrepancy control
The above recursive build-up has two key steps. The first is to decide whether to further split a sub-rectangle. One helpful intuition is that for piecewise constant densities, the distribution conditioned on each sub-rectangle is uniform. Therefore the partition should stop when the points inside a sub-rectangle are approximately uniformly scattered. In other words, we stop the partition when further
2
[Figure 1 graphic omitted: the left panel shows a sequence of binary partitions of the unit square with the corresponding tree; the right panel shows scattered points with bin boundaries and gap labels A: 1/60, B: 1/60, C: 2/60, D: 7/60.]
Figure 1: Left: a sequence of binary partitions and the corresponding tree representation; if we encode partitioning information (e.g., the location where the split occurs) in the nodes, there is a one-to-one mapping between the tree representations and the partitions. Right: the gaps with m = 3; we split the rectangle at location D, which corresponds to the largest gap (assuming it does not satisfy (2); see the text for more details).
partitioning does not reveal much additional information about the underlying density landscape. We
propose to use star discrepancy, a concept originating from the analysis of Quasi-Monte Carlo
methods, to formally measure the degree of uniformity of points in a sub-rectangle. Star discrepancy
is defined as:
Definition 1. Given $n$ points $X_n = \{x_1, \ldots, x_n\}$ in $[0,1]^d$, the star discrepancy $D^*(X_n)$ is defined as:
$$D^*(X_n) = \sup_{a \in [0,1]^d} \Big| \frac{1}{n} \sum_{i=1}^{n} \mathbf{1}\{x_i \in [0, a)\} - \prod_{j=1}^{d} a_j \Big| \qquad (1)$$
The supremum is taken over all $d$-dimensional sub-rectangles $[0, a)$. Given star discrepancy $D^*(X_n)$, we have the following error bound for Monte Carlo integration (see [6] for a proof):
Theorem 2. (Koksma-Hlawka inequality) Let $X_n = \{x_1, x_2, \ldots, x_n\}$ be a set of points in $[0,1]^d$ with discrepancy $D^*(X_n)$; let $f$ be a function on $[0,1]^d$ of bounded variation $V(f)$. Then,
$$\Big| \int_{[0,1]^d} f(x)\,dx - \frac{1}{n} \sum_{i=1}^{n} f(x_i) \Big| \le V(f)\, D^*(X_n)$$
where $V(f)$ is the total variation in the sense of Hardy and Krause (see [7] for its precise definition).
The above theorem implies that if the star discrepancy $D^*(X_n)$ is under control, the empirical distribution
will be a good approximation to the true distribution. Therefore, we may decide to keep partitioning
a sub-rectangle until its discrepancy is lower than some threshold. We shall see in section 4 that this
provably guarantees our density estimate is a good approximation to the true density function.
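As a quick, self-contained illustration of Definition 1 (our own sketch, not from the paper; the grid resolution and function name are arbitrary), the supremum over anchors $a$ can be coarsely approximated from below by scanning a finite grid:

```python
# Coarse lower bound on the star discrepancy D*(X_n) of Definition 1,
# obtained by taking the sup over a finite grid of anchors a instead of
# all of [0,1]^d. Illustrative only; see Section 5.1 and [12] for
# practical exact and randomized algorithms.
import itertools
import numpy as np

def star_discrepancy_lower_bound(X, grid=20):
    n, d = X.shape
    ticks = np.linspace(0.0, 1.0, grid + 1)[1:]   # candidate anchor coordinates
    best = 0.0
    for a in itertools.product(ticks, repeat=d):
        a = np.asarray(a)
        emp = np.mean(np.all(X < a, axis=1))      # empirical mass of [0, a)
        best = max(best, abs(emp - np.prod(a)))
    return best

X = np.random.default_rng(0).random((1000, 2))    # roughly uniform sample
print(star_discrepancy_lower_bound(X))            # small for uniform points
```

The cost grows like grid^d, which is one reason the implementation section below resorts to one-dimensional bounds and the L2 discrepancy.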
Another important ingredient of all partition based methods is the choice of splitting point. In order to find a good location to split for $[a, b] = \prod_{j=1}^{d} [a_j, b_j]$, we divide the $j$th dimension into $m$ equal-sized bins and keep track of the gaps at the interior cut points $a_j + (b_j - a_j)/m, \ldots, a_j + (b_j - a_j)(m-1)/m$, where the gap $g_{jk}$ is defined as
$$g_{jk} = \Big| \frac{1}{n} \sum_{i=1}^{n} \mathbf{1}\big(x_{ij} < a_j + (b_j - a_j)k/m\big) - k/m \Big| \quad \text{for } k = 1, \ldots, m-1;$$
in total $(m-1)d$ gaps are recorded (Figure 1). Here $m$ is a hyper-parameter chosen by the user. $[a, b]$ is split into two sub-rectangles along the dimension and location corresponding to the maximum gap (Figure 1). The
the sequel. One distinct feature of DSP is it only requires the user to specify two parameters: m, ?,
where m is the number of bins along each dimension; ? is the parameter for discrepancy control (See
theorem 2 for more details). In some applications, the user may prefer putting an upper bound on
the number of total partitions. In that case, there is typically no need to specify ?. Choices for these
parameters are discussed in Section 5.
The resulting density estimates p? is a piecewise constant function defined on a binary partition
PL
of ?: p?(x) = i=1 d(ri )1{x ? ri } where 1 is the indicator function; L is the total number of
sub-rectangles in the final partition; {ri , d(ri )}L
i=1 are the sub-rectangle and density pairs. We
demonstrate in section 5 how p?(x) can be leveraged to find good initializations for k-means. In the
following section, we establish a convergence result of our density estimator.
Algorithm 1 Density Estimation via Discrepancy Based Sequential Partition (DSP)
Input: $X_N$, $m$, $\gamma$
Output: A piecewise constant function $\Pr(\cdot)$ defined on a binary partition $R$
Let $\Pr(r)$ denote the probability mass of region $r \subseteq \Omega$; let $X_N(r)$ denote the points in $X_N$ that lie within $r$, where $r \subseteq \Omega$; $n_i$ denotes the size of set $X^{(i)}$.
1: procedure DSP($X_N$, $m$, $\gamma$)
2:    $R = \{[0,1]^d\}$, $\Pr([0,1]^d) = 1$
3:    while true do
4:        $R' = \emptyset$
5:        for each $r_i = [a^{(i)}, b^{(i)}]$ in $R$ do
6:            Calculate gaps $\{g_{jk}\}_{j=1,\ldots,d,\ k=1,\ldots,m-1}$
7:            Scale $X(r_i) = \{x_l\}_{l=1}^{n_i}$ to $\tilde{X}^{(i)} = \{\tilde{x}_l = (\frac{x_{l1}-a^{(i)}_1}{b^{(i)}_1-a^{(i)}_1}, \ldots, \frac{x_{ld}-a^{(i)}_d}{b^{(i)}_d-a^{(i)}_d})\}_{l=1}^{n_i}$
8:            if $X(r_i) \ne \emptyset$ and $D^*(\tilde{X}^{(i)}) > \gamma\sqrt{N}/n_i$ then    ▷ Condition (2) in Theorem 4
9:                Split $r_i$ into $r_{i_1} = [a^{(i_1)}, b^{(i_1)}]$ and $r_{i_2} = [a^{(i_2)}, b^{(i_2)}]$ along the max gap (Figure 1).
10:               $\Pr(r_{i_1}) = \Pr(r_i)\,\frac{|X_N(r_{i_1})|}{n_i}$, $\Pr(r_{i_2}) = \Pr(r_i) - \Pr(r_{i_1})$
11:               $R' = R' \cup \{r_{i_1}, r_{i_2}\}$
12:            else $R' = R' \cup \{r_i\}$
13:        if $R' \ne R$ then $R = R'$
14:        else return $R$, $\Pr(\cdot)$
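For concreteness, here is a compact Python rendering of Algorithm 1 (our own sketch, not the authors' C++ implementation). To keep it short, the exact star discrepancy $D^*$ is replaced by its cheap coordinate-wise lower bound from Section 5.1, so the stopping test is only an approximation of condition (2):

```python
# Sketch of Algorithm 1 (DSP). Not the authors' implementation: the exact
# star discrepancy is approximated from below by the one-dimensional
# coordinate-wise bound (Section 5.1), which changes the stopping behavior.
import numpy as np

def disc_1d_max(Z):
    """max over coordinates j of D*({z_ij}), using the exact 1-D formula
    D*({z}) = 1/(2n) + max_i |z_(i) - (2i-1)/(2n)| for sorted points."""
    n, d = Z.shape
    ref = (2.0 * np.arange(1, n + 1) - 1.0) / (2.0 * n)
    return max(1.0 / (2 * n) + np.max(np.abs(np.sort(Z[:, j]) - ref))
               for j in range(d))

def best_split(X, a, b, m):
    """Dimension and cut location of the largest gap g_jk."""
    best = (-1.0, 0, None)
    for j in range(len(a)):
        for k in range(1, m):
            cut = a[j] + (b[j] - a[j]) * k / m
            gap = abs(np.mean(X[:, j] < cut) - k / m)
            if gap > best[0]:
                best = (gap, j, cut)
    return best[1], best[2]

def dsp(X, m=10, gamma=0.01):
    N, d = X.shape
    stack = [(np.zeros(d), np.ones(d), X, 1.0)]  # (a, b, points, mass)
    leaves = []
    while stack:                                  # note: no depth guard; a sketch
        a, b, pts, mass = stack.pop()
        n = len(pts)
        if n > 0:
            Z = (pts - a) / (b - a)               # rescale to the unit cube
        if n == 0 or disc_1d_max(Z) <= gamma * np.sqrt(N) / n:
            leaves.append((a, b, mass / np.prod(b - a)))   # leaf density
            continue
        j, cut = best_split(pts, a, b, m)
        left = pts[:, j] < cut
        aL, bL = a.copy(), b.copy(); bL[j] = cut
        aR, bR = a.copy(), b.copy(); aR[j] = cut
        stack.append((aL, bL, pts[left], mass * left.mean()))
        stack.append((aR, bR, pts[~left], mass * (1.0 - left.mean())))
    return leaves   # list of (a, b, density) triples
```

The estimate $\hat{p}(x)$ at a point $x$ is simply the density of the leaf containing $x$.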
4 Theoretical results
Before we establish our main theorem, we need the following lemma¹:
Lemma 3. Let $D^*_n = \inf_{\{x_1, \ldots, x_n\} \subset [0,1]^d} D^*(x_1, \ldots, x_n)$; then we have
$$D^*_n \le c\,\sqrt{\frac{d}{n}}$$
for all $n, d \in \mathbb{R}^+$, where $c$ is some positive constant.
(¹ The proof of Lemma 3 can be found in [8]. Theorem 4 and Corollary 5 are proved in the supplementary material.)
We now state our main theorem:
Theorem 4. Let $f$ be a function defined on $\Omega = [0,1]^d$ with bounded variation. Let $X_N = \{x_1, \ldots, x_N\} \subset \Omega$ and let $\{[a^{(i)}, b^{(i)}],\ i = 1, \cdots, L\}$ be a level-$L$ binary partition of $\Omega$. Further denote by $X^{(i)} = \{x_j = (x_{j1}, \ldots, x_{jd}) : x_j \in [a^{(i)}, b^{(i)}]\} \subseteq X_N$ the part of $X_N$ in sub-rectangle $i$, and let $n_i = |X^{(i)}|$. Suppose that in each sub-rectangle $[a^{(i)}, b^{(i)}]$, $X^{(i)}$ satisfies
$$D^*(\tilde{X}^{(i)}) \le \lambda^{(i)} D^*_{n_i} \qquad (2)$$
where $\tilde{X}^{(i)} = \big\{\tilde{x}_j = \big(\frac{x_{j1} - a^{(i)}_1}{b^{(i)}_1 - a^{(i)}_1}, \ldots, \frac{x_{jd} - a^{(i)}_d}{b^{(i)}_d - a^{(i)}_d}\big) : x_j \in X^{(i)}\big\}$, $\lambda^{(i)} = \frac{\gamma}{c}\sqrt{\frac{N}{n_i d}}$ for some positive constant $\gamma$, and $D^*_{n_i}$ is defined as in Lemma 3. Then
$$\Big| \int_{[0,1]^d} f(x)\,\hat{p}(x)\,dx - \frac{1}{N}\sum_{i=1}^{N} f(x_i) \Big| \le \frac{\gamma\, V(f)}{\sqrt{N}} \qquad (3)$$
where $\hat{p}(x)$ is the piecewise constant density estimator given by
$$\hat{p}(x) = \sum_{i=1}^{L} d_i\, \mathbf{1}\{x \in [a^{(i)}, b^{(i)}]\}$$
with $d_i = \big(\prod_{j=1}^{d} (b^{(i)}_j - a^{(i)}_j)\big)^{-1} n_i/N$, i.e., the empirical density.
In the above theorem, $\lambda^{(i)}$ controls the relative uniformity of the points and is adaptive to $X^{(i)}$. It imposes more restrictive constraints on regions containing a larger proportion of the sample ($n_i/N$). Although our density estimate is not the only estimator which satisfies (3) (for example, both the empirical distribution in the asymptotic limit and the kernel density estimator with sufficiently small bandwidth meet the criterion), one advantage of our density estimator is that it provides a very concise summary of the data while at the same time capturing the landscape of the underlying distribution. In addition, the piecewise constant function does not suffer from having too many "local bumps", which is a common problem for kernel density estimators. Moreover, under certain regularity conditions (e.g. bounded second moments), the convergence rate of Monte Carlo methods for $\frac{1}{N}\sum_{i=1}^{N} f(x_i)$ to $\int_{[0,1]^d} f(x)\,p(x)\,dx$ is of order $O(N^{-1/2})$. Our density estimate is optimal in the sense that it achieves the same rate of convergence. Given Theorem 4, we have the following convergence result:
Corollary 5. Let $\hat{p}(x)$ be the estimated density function as in Theorem 4. For any hyper-rectangle $A = [a, b] \subseteq [0,1]^d$, let $\hat{P}(A) = \int_A \hat{p}(x)\,dx$ and $P(A) = \int_A p(x)\,dx$; then
$$\sup_{A \subseteq [0,1]^d} |\hat{P}(A) - P(A)| \to 0$$
at the order $O(n^{-1/2})$.
Remark 4.1. It is worth pointing out that the total variation distance between two probability measures $\hat{P}$ and $P$ is defined as $\delta(\hat{P}, P) = \sup_{A \in \mathcal{B}} |\hat{P}(A) - P(A)|$, where $\mathcal{B}$ is the Borel $\sigma$-algebra of $[0,1]^d$. In contrast, Corollary 5 restricts $A$ to be a hyper-rectangle.
5 Experimental results
5.1 Implementation details
In some applications, we find it helpful to first estimate the marginal densities of the component variables $x_{\cdot j}$ ($j = 1, \ldots, d$), then apply a copula transformation $z_{\cdot j} = \hat{F}_j(x_{\cdot j})$, where $\hat{F}_j$ is the estimated cdf of $x_{\cdot j}$. After such a transformation, we can take the domain to be $[0,1]^d$. We also find this can reduce the number of partitions needed by DSP. Unless otherwise stated, we use the copula transform in our experiments whenever the dimension exceeds 3.
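A minimal sketch of this transform (our own illustration; the rank-based empirical CDF and the $1/(n+1)$ scaling are implementation choices we assume, not prescribed by the paper):

```python
# Sketch: coordinate-wise copula transform via the empirical CDF, mapping
# each coordinate to (0, 1) so the transformed data have near-uniform marginals.
import numpy as np

def copula_transform(X):
    n, d = X.shape
    ranks = np.argsort(np.argsort(X, axis=0), axis=0) + 1   # ranks 1..n per column
    return ranks / (n + 1)   # /(n+1) keeps values strictly inside (0, 1)

X = np.random.default_rng(2).lognormal(size=(1000, 3))
Z = copula_transform(X)
print(Z.min(), Z.max())   # all values inside the unit cube
```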
We make the following observations to improve the efficiency of DSP: 1) First observe that $\max_{j=1,\ldots,d} D^*(\{x_{ij}\}_{i=1}^{n}) \le D^*(\{x_i\}_{i=1}^{n})$. Let $x_{(i)j}$ be the $i$th smallest element in $\{x_{ij}\}_{i=1}^{n}$; then $D^*(\{x_{ij}\}_{i=1}^{n}) = \frac{1}{2n} + \max_i \big|x_{(i)j} - \frac{2i-1}{2n}\big|$ [9], which has complexity $O(n \log n)$. Hence $\max_{j=1,\ldots,d} D^*(\{x_{ij}\}_{i=1}^{n})$ can be compared against $\gamma\sqrt{N}/n$ first, before calculating $D^*(\{x_i\}_{i=1}^{n})$; 2) $\gamma\sqrt{N}/n$ is large when $n$ is small, but $D^*(\{x_i\}_{i=1}^{n})$ is bounded above by 1; 3) $\gamma\sqrt{N}/n$ is tiny when $n$ is large, and $D^*(\{x_i\}_{i=1}^{n})$ is bounded below by $c_d \log^{(d-1)/2} n \cdot n^{-1}$ with some constant $c_d$ depending on $d$ [10]; thus we can keep splitting without checking (2) when $\gamma\sqrt{N}/n \le \epsilon$, where $\epsilon$ is a small positive constant (say 0.001) specified by the user. This strategy has proved to be effective in decreasing the runtime significantly, at the cost of introducing a few more sub-rectangles.
Another approximation that works well in practice is to replace the star discrepancy with the computationally attractive $L_2$ star discrepancy, i.e.,
$$D^{(2)}(X_n) = \Big( \int_{[0,1]^d} \Big| \frac{1}{n} \sum_{i=1}^{n} \mathbf{1}_{x_i \in [0,a)} - \prod_{j=1}^{d} a_j \Big|^2 da \Big)^{1/2};$$
in fact, several statistics to test the uniformity hypothesis based on $D^{(2)}$ are proposed in [11]; however, the theoretical guarantee in Theorem 4 then no longer holds. By Warnock's formula [9],
$$[D^{(2)}(X_n)]^2 = \frac{1}{3^d} - \frac{2^{1-d}}{n} \sum_{i=1}^{n} \prod_{j=1}^{d} (1 - x_{ij}^2) + \frac{1}{n^2} \sum_{i,l=1}^{n} \prod_{j=1}^{d} \min\{1 - x_{ij}, 1 - x_{lj}\}$$
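For small point sets, Warnock's formula can be evaluated directly in $O(n^2 d)$ time; the following sketch (ours, with illustrative names) is convenient for sanity checks:

```python
# Direct O(n^2 d) evaluation of Warnock's formula for the L2 star
# discrepancy quoted above. Illustrative sketch; the fast
# O(n log^{d-1} n) algorithm is due to Frank and Heinrich [9].
import numpy as np

def l2_star_discrepancy(X):
    n, d = X.shape
    term1 = 3.0 ** (-d)
    term2 = (2.0 ** (1 - d) / n) * np.sum(np.prod(1.0 - X ** 2, axis=1))
    # pairwise products over j of min(1 - x_ij, 1 - x_lj)
    pair = np.minimum(1.0 - X[:, None, :], 1.0 - X[None, :, :]).prod(axis=2)
    term3 = pair.sum() / n ** 2
    return np.sqrt(term1 - term2 + term3)

X = np.random.default_rng(3).random((200, 4))
print(l2_star_discrepancy(X))
```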
$D^{(2)}$ can be computed in $O(n \log^{d-1} n)$ by K. Frank and S. Heinrich's algorithm [9]. At each scan of $R$ in Algorithm 1, the total complexity is at most $\sum_{i=1}^{L} O(n_i \log^{d-1} n_i) \le \sum_{i=1}^{L} O(n_i \log^{d-1} N) \le O(N \log^{d-1} N)$.
There are no closed form formulas for calculating $D^*(X_n)$ and $D^*_n$ except in low dimensions. If we substitute $\lambda^{(i)}$ in (2) and apply Lemma 3, what we are actually trying to do is to control $D^*(\tilde{X}^{(i)})$ by $\gamma\sqrt{N}/n_i$. There is much existing work on ways to approximate $D^*(X_n)$. In particular, a new randomized algorithm based on threshold accepting is developed in [12]. Comprehensive numerical tests indicate that it improves upon other algorithms, especially when $20 \le d \le 50$. We used this algorithm in our experiments. The interested reader is referred to the original paper for more details.
5.2 DSP as a density estimate
1) To demonstrate the method and visualize the results, we apply it to several 2-dimensional data sets
simulated from 3 distributions with different geometry:
1. Gaussian: $x \sim N(\mu, \Sigma)\,\mathbf{1}\{x \in [0,1]^2\}$, with $\mu = (.5, .5)^T$, $\Sigma = [0.08, 0.02; 0.02, 0.02]$
2. Mixture of Gaussians: $x \sim \frac{1}{2}\sum_{i=1}^{2} N(\mu_i, \Sigma_i)\,\mathbf{1}\{x \in [0,1]^2\}$ with $\mu_1 = (.50, .25)^T$, $\mu_2 = (.50, .75)^T$, $\Sigma_1 = \Sigma_2 = [0.04, 0.01; 0.01, 0.01]$;
3. Mixture of Betas: $x \sim \frac{1}{3}(\text{beta}(2,5)\text{beta}(5,2) + \text{beta}(4,2)\text{beta}(2,4) + \text{beta}(1,3)\text{beta}(3,1))$;
where $N(\mu, \Sigma)$ denotes the multivariate Gaussian distribution and $\text{beta}(\alpha, \beta)$ denotes the beta distribution.
We simulated $10^5$ points for each distribution. See the first row of Figure 2 for visualizations of the estimated densities. The figure shows DSP accurately estimates the true density landscape in these three toy examples.
[Figure 2 graphic omitted: only axis ticks, panel labels ($f_1$, $f_2$, $f_3$) and mode markers survived extraction.]
Figure 2: First row: estimated densities for 3 simulated 2D datasets. The modes are marked with stars. The corresponding contours of the true densities are embedded for comparison. Second row: simulation of 2, 5 and 10 dimensional cases (from left to right) with reference functions $f_1$, $f_2$, $f_3$. x-axis: sample size $n$. y-axis: error between the true integral and the estimated integral. The vertical bars are standard error bars obtained from 10 replications. See section 5.2 2) for more details.
2) To evaluate the theoretical bound (3), we choose the following three reference functions with dimension $d = 2$, 5 and 10 respectively: $f_1(x) = \sum_{i=1}^{n}\sum_{j=1}^{d} x_{ij}^{1/2}$, $f_2(x) = \sum_{i=1}^{n}\sum_{j=1}^{d} x_{ij}$, $f_3(x) = \big(\sum_{i=1}^{n}\sum_{j=1}^{d} x_{ij}^2\big)^{1/2}$. We generate $n \in \{10^2, 10^3, 10^4, 10^5, 10^6\}$ samples from $p(x) = \prod_{j=1}^{d} \frac{1}{2}\big(\text{beta}(x_j, 15, 5) + \text{beta}(x_j, 5, 15)\big)$, where $\text{beta}(\cdot, \alpha, \beta)$ is the density function of the beta distribution.
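Since each coordinate of this test density is an independent two-component beta mixture, sampling from it is straightforward (our own sketch, with assumed names):

```python
# Sketch: draw n points from p(x) = prod_j (1/2)(beta(x_j;15,5) + beta(x_j;5,15)).
import numpy as np

def sample_test_density(n, d, seed=0):
    rng = np.random.default_rng(seed)
    pick = rng.random((n, d)) < 0.5   # choose a mixture component per coordinate
    return np.where(pick,
                    rng.beta(15, 5, size=(n, d)),
                    rng.beta(5, 15, size=(n, d)))

X = sample_test_density(10_000, 5)
print(X.shape, X.min(), X.max())
```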
The error $\big|\int_{[0,1]^d} f_k(x)p(x)\,dx - \int_{[0,1]^d} f_k(x)\hat{p}(x)\,dx\big|$ is bounded by $\big|\int_{[0,1]^d} f_k(x)p(x)\,dx - \frac{1}{n}\sum_{j=1}^{n} f_k(x_j)\big| + \big|\int_{[0,1]^d} f_k(x)\hat{p}(x)\,dx - \frac{1}{n}\sum_{j=1}^{n} f_k(x_j)\big|$, where $\hat{p}(x)$ is the estimated density. For almost all Monte Carlo methods, the first term is of order $O(n^{-1/2})$. The second term is controlled by (3). Thus in total the error is of order $O(n^{-1/2})$. We plot the error against the sample size on a log-log scale for each dimension in the second row of Figure 2. The linear trends in the plots corroborate the bound in (3).
3) To show the efficiency and scalability of DSP, we compare it with KDE, OPT and BSP in terms of estimation error and running time. We simulate samples from $x \sim \big(\sum_{i=1}^{4} \pi_i N(\mu_i, \Sigma_i)\big)\mathbf{1}\{x \in [0,1]^d\}$ with $d \in \{2, 3, \cdots, 6\}$ and $N \in \{10^3, 10^4, 10^5\}$. The estimation error measured in terms of Hellinger distance is summarized in Table 1. We set $m = 10$, $\gamma = 0.01$ in our experiments. We found the resulting Hellinger distance to be quite robust as $m$ ranges from 3 to 20 (equally
spaced). The supplementary material includes the exact details about the parameters of the simulating
distributions, estimation of Hellinger distance and other implementation details for the algorithms.
The table shows DSP achieves accuracy comparable to the best of the other three methods. As mentioned at the beginning of this paper, one major advantage of DSP is its speed. Table 2 shows our method achieves a significant speed-up over the other three algorithms.
Table 1: Error in Hellinger Distance between the true density and KDE, OPT, BSP, our method
for each (d, n) pair. The numbers in parentheses are standard errors from 20 replicas. The best of the four methods is highlighted in bold. Note that the simulations, being based on mixtures of Gaussians, are unfavorable for methods based on domain partitions.
Hellinger Distance (n = 10^3)
d   KDE              OPT              BSP              DSP
2   0.2331 (0.0421)  0.2147 (0.0172)  0.2533 (0.0163)  0.2634 (0.0207)
3   0.2893 (0.0227)  0.3279 (0.0128)  0.2983 (0.0133)  0.3072 (0.0265)
4   0.3913 (0.0325)  0.3839 (0.0136)  0.3872 (0.0117)  0.3895 (0.0191)
5   0.4522 (0.0317)  0.4748 (0.009)   0.4435 (0.0167)  0.4307 (0.0302)
6   0.5511 (0.0318)  0.5508 (0.0307)  0.5515 (0.0354)  0.5527 (0.0381)

Hellinger Distance (n = 10^4)
d   KDE              OPT              BSP              DSP
2   0.1104 (0.0102)  0.0957 (0.0036)  0.1222 (0.0043)  0.0803 (0.0013)
3   0.2003 (0.0199)  0.1722 (0.0028)  0.1717 (0.0083)  0.1721 (0.0073)
4   0.2466 (0.0113)  0.2726 (0.0031)  0.2882 (0.0047)  0.2955 (0.0065)
5   0.3599 (0.0199)  0.3562 (0.0025)  0.3987 (0.0022)  0.3563 (0.0031)
6   0.4833 (0.0255)  0.4015 (0.0023)  0.4093 (0.0046)  0.3911 (0.0037)

Hellinger Distance (n = 10^5)
d   KDE              OPT              BSP              DSP
2   0.0305 (0.0021)  0.0376 (0.0021)  0.0345 (0.0025)  0.0312 (0.0027)
3   0.1466 (0.0047)  0.1117 (0.0008)  0.1323 (0.0009)  0.1020 (0.004)
4   0.1900 (0.0057)  0.1880 (0.0006)  0.2100 (0.0006)  0.1827 (0.0059)
5   0.2817 (0.0088)  0.2822 (0.0005)  0.2916 (0.0003)  0.2910 (0.0002)
6   0.3697 (0.0122)  0.3409 (0.0005)  0.3693 (0.0004)  0.3701 (0.0002)
Table 2: Average CPU time in seconds of KDE, OPT, BSP and our method for each (d, n) pair.
The numbers in parentheses are standard errors from 20 replicas. The speed-up is computed as the ratio between the minimum run time of the other three methods and the run time of
DSP. All methods are implemented in C++. See the supplementary material for more details.
Running time (n = 10^3)
d   KDE             OPT             BSP            DSP            speed-up
2   2.445 (0.191)   9.484 (0.029)   0.833 (0.006)  0.020 (0.002)  41
3   2.655 (0.085)   25.073 (0.056)  1.054 (0.010)  0.019 (0.002)  55
4   3.540 (0.116)   32.112 (0.072)  1.314 (0.014)  0.019 (0.002)  69
5   4.107 (0.110)   37.599 (0.088)  1.713 (0.019)  0.020 (0.002)  85
6   4.986 (0.214)   41.565 (0.147)  2.749 (0.024)  0.020 (0.001)  137

Running time (n = 10^4)
d   KDE             OPT             BSP             DSP            speed-up
2   21.903 (1.905)  31.561 (0.079)  1.445 (0.014)   0.033 (0.002)  43
3   26.964 (1.089)  36.683 (0.076)  2.819 (0.036)   0.044 (0.001)  64
4   37.141 (2.244)  39.219 (0.221)  5.861 (0.076)   0.049 (0.002)  119
5   45.580 (2.124)  44.520 (0.587)  12.220 (0.154)  0.078 (0.002)  157
6   53.291 (2.767)  43.032 (0.413)  21.696 (0.213)  0.127 (0.004)  170

Running time (n = 10^5)
d   KDE                OPT             BSP              DSP            speed-up
2   230.179 (130.572)  44.561 (0.639)  7.750 (0.178)    0.242 (0.015)  33
3   278.075 (10.576)   56.329 (0.911)  21.104 (0.576)   0.378 (0.011)  55
4   347.501 (14.676)   67.366 (3.018)  53.620 (2.917)   0.485 (0.018)  108
5   412.828 (16.252)   77.776 (2.215)  115.869 (6.872)  0.706 (0.051)  110
6   519.298 (29.276)   81.023 (3.703)  218.999 (6.046)  0.896 (0.071)  90
5.3 DSP-kmeans
In addition to being a competitive density estimator, we demonstrate in this section how DSP can be
used to get good initializations for k-means. The resulting algorithm is referred to as DSP-kmeans.
Recall that given a fixed number of clusters K, the goal of k-means is to minimize the following
objective function:
$$J^*_K = \sum_{k=1}^{K} \sum_{i \in C_k} \|x_i - m_k\|_2^2 \qquad (4)$$
where $C_k$ denotes the set of points in cluster $k$ and $\{m_k\}_{k=1}^{K}$ denote the cluster means.
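In code, evaluating the objective (4) for a given assignment is a one-liner (illustrative sketch):

```python
# Sketch: the k-means objective (4) for data X, integer labels, and a
# (K, d) array of cluster means. Names are illustrative.
import numpy as np

def kmeans_objective(X, labels, means):
    return sum(float(np.sum((X[labels == k] - means[k]) ** 2))
               for k in range(len(means)))
```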
The original k-means algorithm proceeds by alternating between assigning points to centers and recomputing the means. As a result, the final clustering is usually only a local optimum and can be sensitive to the initialization. Finding a good initialization has attracted a lot of attention over the past decade and there is now a decent number of existing methods, each with its own perspective. Below we review a few representative types.
One type of method looks for good initial centers sequentially. The idea is that once the first center is
picked, the second should be far away from the one that is already chosen. A similar argument applies
to the rest of the centers. [13] [14] fall under this category. Several studies [15] [16] borrow ideas
from hierarchical agglomerative clustering (HAC) to look for good initializations. In our experiments
we used the algorithm described in [15]. One essential ingredient of this type of algorithms is the inter
cluster distance, which could be problem dependent. Last but not least, there is a class of methods
that attempt to utilize the relationship between PCA and k-means. [17] proposes a PCA-guided
search for initial centers. [18] combines the relationship between PCA and k-means to look for
good initializations. The general idea is to recursively split a cluster according to the first principal
component. We refer to this algorithm as PCA-REC.
DSP-kmeans is different from previous methods in that it tackles the initialization problem from
a density estimation point of view. The idea behind DSP-kmeans is that cluster centers should be
close to the modes of the underlying probability density function. If a density estimator can accurately
locate the modes of the underlying true density function, it should also be able to find good cluster
centers. Due to its concise representation, DSP can be used for finding initializations for k-means
in the following way: Suppose we are trying to cluster a dataset Y with K clusters. We first apply
DSP on Y to find a partition with K non-empty sub-rectangles, i.e. sub-rectangles that have at least
one point from $Y$. The output of DSP will be $K$ sub-rectangles. Denote the set of indices of the points in sub-rectangle $j$ by $S_j$, $j = 1, \ldots, K$, and let $I_j = \frac{1}{|S_j|} \sum_{i \in S_j} Y_i$, i.e. $I_j$ is the sample average of the points that fall into sub-rectangle $j$. We then use $\{I_1, \cdots, I_K\}$ to initialize k-means. We also explored the following two-phase procedure: first over-partition the space to build a more accurate density estimate, treating points in different sub-rectangles as belonging to different clusters; then merge the sub-rectangles hierarchically based on some measure of between-cluster distance. We have found
this to be helpful when the number of clusters K is relatively small. For completeness, we have
included the details of this two-phase DSP-kmeans in the supplementary material.
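A sketch of this initialization (our own illustration; it assumes a DSP run, such as the Algorithm 1 sketch above, stopped once $K$ non-empty leaves remain):

```python
# Sketch: DSP-kmeans initialization. `leaves` is a list of (a, b, density)
# sub-rectangles from a DSP run stopped at K non-empty leaves; shared leaf
# boundaries may double-count points, which is fine for an illustration.
import numpy as np

def dsp_init_centers(Y, leaves, K):
    centers = []
    for a, b, _ in leaves:
        inside = np.all((Y >= a) & (Y <= b), axis=1)
        if inside.any():
            centers.append(Y[inside].mean(axis=0))  # the I_j of the text
    return np.array(centers[:K])

# These centers can then seed any standard k-means routine, e.g.
# sklearn.cluster.KMeans(n_clusters=K, init=centers, n_init=1).fit(Y)
```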
We test DSP-kmeans on 4 real-world datasets of various sizes and dimensions. Two
of them are taken from the UCI machine learning repository [19]; the stem cell data set is taken from
the FlowCAP challenges [20]; the mouse bone marrow data set is a recently published single-cell
dataset measured using mass cytometry [21]. We use random initialization as the base case and
compare it with DSP-kmeans, k-means++, PCA-REC and HAC. The numbers in Table 3 are the
improvement in the k-means objective function of a method over random initialization. The results show that when the number of clusters is relatively large, DSP-kmeans achieves a lower objective value on these four datasets. Although in theory almost any density estimator could be used to find good initializations.
Table 3: Comparison of different initialization methods. The number for method $j$ is relative to random initialization: $\frac{J_{K,0} - J_{K,j}}{J_{K,0}}$, where $J_{K,j}$ is the k-means objective value of method $j$ at convergence. Here we use 0 as the index for random initialization. A negative number means the method performs worse than random initialization.
Road network (n = 4.3e+04, d = 3): improvement over random init.
k    k-means++  PCA-REC  HAC    DSP-kmeans
4    0.0        -0.02    0.01   0.0
10   0.0        -0.12    0.25   0.08
20   0.43       -0.46    1.68   2.04
40   11.7       -2.52    2.27   13.62
60   19.78      -3.45    18.69  20.91

Stem cell (n = 9.9e+03, d = 6): improvement over random init.
k    k-means++  PCA-REC  HAC    DSP-kmeans
4    3.45       -2.1     3.67   3.96
10   3.82       -4.2     3.79   3.6
20   9.96       -3.59    9.91   9.39
40   9.95       -6.39    10.11  12.49
60   6.12       -7.29    8.19   13.7

Mouse bone marrow (n = 8.7e+04, d = 39): improvement over random init.
k    k-means++  PCA-REC  HAC    DSP-kmeans
4    1.51       0.03     1.25   0.4
10   0.45       0.24     0.77   0.83
20   0.63       -1.2     0.68   0.79
40   1.99       -3.56    2.06   2.55
60   2.48       -5.25    2.57   2.65

US census (n = 2.4e+06, d = 68): improvement over random init.
k    k-means++  PCA-REC  HAC    DSP-kmeans
4    47.44      -2.33    46.72  40.44
10   40.52      -1.9     41.48  39.52
20   32.63      -1.97    29.49  32.55
40   32.66      -5.15    33.41  34.61
60   21.7       -1.19    16.28  21.68
Based on the comparison of Hellinger distance in Table 1, we would expect them to have similar performance. However, for OPT and BSP, runtime would be a major bottleneck for their applicability. The situation for KDE is slightly more complicated: not only is it computationally quite intensive, but its output cannot be represented as concisely as that of partition based methods. Here we see that the efficiency of DSP makes it possible to utilize it for other machine learning tasks.
6 Conclusion
In this paper we propose a novel density estimation method based on ideas from Quasi-Monte Carlo
analysis. We prove that it achieves an $O(n^{-1/2})$ error rate. By comparing it with other density estimation
methods, we show DSP has comparable performance in terms of Hellinger distance while achieving
a significant speed-up. We also show how DSP can be used to find good initializations for k-means.
Due to space limitations, we were unable to include other interesting applications, including mode seeking, data visualization via level set trees, and data compression [22].
Acknowledgements. This work was supported by NIH-R01GM109836, NSF-DMS1330132 and
NSF-DMS1407557. The second author?s work was done when the author was a graduate student at
Stanford University.
References
[1] Luo Lu, Hui Jiang, and Wing H Wong. Multivariate density estimation by Bayesian sequential partitioning. Journal of the American Statistical Association, 108(504):1402–1410, 2013.
[2] Emanuel Parzen. On estimation of a probability density function and mode. The Annals of Mathematical Statistics, 33(3):1065–1076, 1962.
[3] Wing H Wong and Li Ma. Optional Pólya tree and Bayesian inference. The Annals of Statistics, 38(3):1433–1459, 2010.
[4] Han Liu, Min Xu, Haijie Gu, Anupam Gupta, John Lafferty, and Larry Wasserman. Forest density estimation. The Journal of Machine Learning Research, 12:907–951, 2011.
[5] Parikshit Ram and Alexander G Gray. Density estimation trees. In Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 627–635. ACM, 2011.
[6] Lauwerens Kuipers and Harald Niederreiter. Uniform Distribution of Sequences. Courier Dover Publications, 2012.
[7] Art B Owen. Multidimensional variation for quasi-Monte Carlo. In International Conference on Statistics in Honour of Professor Kai-Tai Fang's 65th Birthday, pages 49–74, 2005.
[8] Stefan Heinrich, Erich Novak, Grzegorz W Wasilkowski, and Henryk Wozniakowski. The inverse of the star-discrepancy depends linearly on the dimension. Acta Arithmetica, 96(3):279–302, 2000.
[9] Carola Doerr, Michael Gnewuch, and Magnus Wahlström. Calculation of discrepancy measures and applications. Preprint, 2013.
[10] Michael Gnewuch. Entropy, randomization, derandomization, and discrepancy. In Monte Carlo and Quasi-Monte Carlo Methods 2010, pages 43–78. Springer, 2012.
[11] Jia-Juan Liang, Kai-Tai Fang, Fred Hickernell, and Runze Li. Testing multivariate uniformity and its applications. Mathematics of Computation, 70(233):337–355, 2001.
[12] Michael Gnewuch, Magnus Wahlström, and Carola Winzen. A new randomized algorithm to approximate the star discrepancy based on threshold accepting. SIAM Journal on Numerical Analysis, 50(2):781–807, 2012.
[13] David Arthur and Sergei Vassilvitskii. k-means++: The advantages of careful seeding. In Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1027–1035. Society for Industrial and Applied Mathematics, 2007.
[14] Ioannis Katsavounidis, C-C Jay Kuo, and Zhen Zhang. A new initialization technique for generalized Lloyd iteration. IEEE Signal Processing Letters, 1(10):144–146, 1994.
[15] Chris Fraley. Algorithms for model-based Gaussian hierarchical clustering. SIAM Journal on Scientific Computing, 20(1):270–281, 1998.
[16] Stephen J Redmond and Conor Heneghan. A method for initialising the k-means clustering algorithm using kd-trees. Pattern Recognition Letters, 28(8):965–973, 2007.
[17] Qin Xu, Chris Ding, Jinpei Liu, and Bin Luo. PCA-guided search for k-means. Pattern Recognition Letters, 54:50–55, 2015.
[18] Ting Su and Jennifer G Dy. In search of deterministic methods for initializing k-means and Gaussian mixture clustering. Intelligent Data Analysis, 11(4):319–338, 2007.
[19] Manohar Kaul, Bin Yang, and Christian S Jensen. Building accurate 3D spatial networks to enable next generation intelligent transportation systems. In Mobile Data Management (MDM), 2013 IEEE 14th International Conference on, volume 1, pages 137–146. IEEE, 2013.
[20] Nima Aghaeepour, Greg Finak, Holger Hoos, Tim R Mosmann, Ryan Brinkman, Raphael Gottardo, Richard H Scheuermann, FlowCAP Consortium, DREAM Consortium, et al. Critical assessment of automated flow cytometry data analysis techniques. Nature Methods, 10(3):228–238, 2013.
[21] Matthew H Spitzer, Pier Federico Gherardini, Gabriela K Fragiadakis, Nupur Bhattacharya, Robert T Yuan, Andrew N Hotson, Rachel Finck, Yaron Carmi, Eli R Zunder, Wendy J Fantl, et al. An interactive reference framework for modeling a dynamic immune system. Science, 349(6244):1259425, 2015.
[22] Robert M Gray and Richard A Olshen. Vector quantization and density estimation. In Compression and Complexity of Sequences 1997. Proceedings, pages 172–193. IEEE, 1997.
5,767 | 6,218 | How Deep is the Feature Analysis underlying Rapid
Visual Categorization?
Sven Eberhardt*
Jonah Cader*
Thomas Serre
Department of Cognitive Linguistic & Psychological Sciences
Brown Institute for Brain Sciences
Brown University
Providence, RI 02818
{sven2,jonah_cader,thomas_serre}@brown.edu
Abstract
Rapid categorization paradigms have a long history in experimental psychology:
Characterized by short presentation times and speeded behavioral responses, these
tasks highlight the efficiency with which our visual system processes natural object
categories. Previous studies have shown that feed-forward hierarchical models
of the visual cortex provide a good fit to human visual decisions. At the same
time, recent work in computer vision has demonstrated significant gains in object
recognition accuracy with increasingly deep hierarchical architectures. But it is
unclear how well these models account for human visual decisions and what they
may reveal about the underlying brain processes.
We have conducted a large-scale psychophysics study to assess the correlation
between computational models and human behavioral responses on a rapid animal
vs. non-animal categorization task. We considered visual representations of varying
complexity by analyzing the output of different stages of processing in three stateof-the-art deep networks. We found that recognition accuracy increases with
higher stages of visual processing (higher level stages indeed outperforming human
participants on the same task) but that human decisions agree best with predictions
from intermediate stages.
Overall, these results suggest that human participants may rely on visual features of
intermediate complexity and that the complexity of visual representations afforded
by modern deep network models may exceed the complexity of those used by
human participants during rapid categorization.
1 Introduction
Our visual system is remarkably fast and accurate. The past decades of research in visual neuroscience
have demonstrated that visual categorization is possible for complex natural scenes viewed in
rapid presentations. Participants can reliably detect and later remember visual scenes embedded in
continuous streams of images with exposure times as low as 100 ms [see 15, for review]. Observers
can also reliably categorize animal vs. non-animal images (and other classes of objects) even when
flashed for 20 ms or less [see 6, for review].
Unlike normal everyday vision which involves eye movements and shifts of attention, rapid visual
categorization is assumed to involve a single feedforward sweep of visual information [see 19, for
review] and engages our core object recognition system [reviewed in 5]. Interestingly, incorrect
responses during rapid categorization tasks are not uniformly distributed across stimuli (as one would
* These authors contributed equally.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
expect from random motor errors) but tend to follow a specific pattern reflecting an underlying
visual strategy [1]. Various computational models have been proposed to describe the underlying
feature analysis [see 2, for review]. In particular, a feedforward hierarchical model constrained by the
anatomy and the physiology of the visual cortex was shown to agree well with human behavioral
responses [16].
In recent years, however, the field of computer vision has championed the development of increasingly
deep and accurate models, pushing the state of the art on a range of categorization problems, from
speech and music to text, genome and image categorization [see 12, for a recent review]. From
AlexNet [11] to VGG [17] and Microsoft CNTK [8], over the years, the ImageNet Large Scale Visual
Recognition Challenge (ILSVRC) has been won by progressively deeper architectures. Some of the
ILSVRC best performing architectures now include 150 layers of processing [8] and even 1,000
layers for other recognition challenges [9], arguably orders of magnitude more than the visual
system (estimated to be O(10), see [16]). Despite the absence of neuroscience constraints on modern
deep learning networks, recent work has shown that these architectures explain neural data better
than earlier models [reviewed in 20] and are starting to match human level of accuracy for difficult
object categorization tasks [8].
It thus raises the question as to whether recent deeper network architectures better account for speeded
behavioral responses during rapid categorization tasks or whether they have actually become too
deep, instead deviating from human responses. Here, we describe a rapid animal vs. non-animal
visual categorization experiment that probes this question. We considered visual representations of
varying complexity by analyzing the output of different stages of processing in state-of-the-art deep
networks [11, 17]. We show that while recognition accuracy increases with higher stages of visual
processing (higher level stages indeed outperforming human participants for the same task) human
decisions agreed best with predictions from intermediate stages.
2 Methods
Image dataset A large set of (target) animal and (distractor) non-animal stimuli was created by
sampling images from ImageNet [4]. We balanced the number of images across basic categories from
14 high-level synsets, to curb biases that are inherent in Internet images. (We used the invertebrate,
bird, amphibian, fish, reptile, mammal, domestic cat, dog, structure, instrumentation, consumer goods,
plant, geological formation, and natural object subtrees.) To reduce the prominence of low-level
visual cues, images containing animals and objects on a white background were discarded. All
pictures were converted to grayscale and normalized for illumination. Images less than 256 pixels in
either dimension were similarly removed and all other images were cropped to a square and scaled to
256 ? 256 pixels. All images were manually inspected and mislabeled images and images containing
humans were removed from the set (? 17% of all images). Finally, we drew stimuli uniformly
(without replacement) from all basic categories to create balanced sets of 300 images. Each set
contained 150 target images (half mammal and half non-mammal animal images) and 150 distractors
(half artificial objects and half natural scenes). We created 7 such sets for a total of 2,100 images
used for the psychophysics experiment described below.
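As an illustration, the preprocessing pipeline described above can be sketched as follows. This is a minimal Python sketch assuming Pillow is available; the manual inspection step and the illumination normalization are omitted, and all names are ours:

```python
from PIL import Image

def preprocess(path, min_side=256, out_size=256):
    img = Image.open(path).convert("L")  # convert to grayscale
    w, h = img.size
    if min(w, h) < min_side:
        return None  # images below 256 pixels in either dimension are discarded
    # center-crop to a square, then scale to out_size x out_size
    s = min(w, h)
    left, top = (w - s) // 2, (h - s) // 2
    return img.crop((left, top, left + s, top + s)).resize((out_size, out_size))
```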
Participants Rapid visual categorization data was gathered from 281 participants using the Amazon
Mechanical Turk (AMT) platform (www.mturk.com). AMT is a powerful tool that allows the
recruitment of massive trials of anonymous workers screened with a variety of criteria [3].
All participants provided informed consent electronically and were compensated $4.00 for their time
(approximately 20-30 min per image set, 300 trials). The protocol was approved by the University IRB and
was carried out in accordance with the provisions of the World Medical Association Declaration of
Helsinki.
Experimental procedure On each trial, the experiment ran as follows: On a white background (1)
a fixation cross appeared for a variable time (1,100-1,600 ms); (2) a stimulus was presented for 50
ms. The order of image presentations was randomized. Participants were instructed to answer as
fast and as accurately as possible by pressing either the "S" or "L" key depending on whether they
saw an animal (target) or non-animal (distractor) image. Key assignment was randomized for each
participant.
Figure 1: Experimental paradigm and stimulus set: (top) Each trial began with a fixation cross
(1,100-1,600 ms), followed by an image presentation (~50 ms). Participants were forced to answer
within 500 ms. A message appeared when participants failed to respond in the allotted time. (bottom)
Sample stimuli from the balanced set of animal and non-animal images (n=2,100). A fast answer
time paradigm was used in favor of masking to avoid possible performance biases between different
classes caused by the mask [6, 15].
Participants were forced to respond within 500 ms (a message was displayed in the absence of a
response past the response deadline). In past studies, this has been shown to yield reliable behavioral
data [e.g. 18]. We have also run a control to verify that the maximum response time did not affect
qualitatively our results.
An illustration of the experimental paradigm is shown in Figure 1. At the end of each block,
participants received feedback about their accuracy. An experiment started with a short practice
during which participants were familiarized with the task (stimulus presentation was slowed down
and participants were provided feedback on their response). No other feedback was provided to
participants during the experiment.
We used the psiTurk framework [13] combined with custom javascript functions. Each trial (i.e.,
fixation cross followed by the stimulus) was converted to a HTML5-compatible video format to
provide the fastest reliable presentation time possible in a web browser. Videos were generated to
include the initial fixation cross and the post-presentation answer screen with the proper timing as
described above. Videos were preloaded before each trial to ensure reliable image presentation times
over the Internet.
We used a photo-diode to assess the reliability of the timing on different machines including different
OS, browsers and screens and found the timing to be accurate to within approximately 10 ms. Images were shown at a
Figure 2: Model decision scores: A classifier (linear SVM) is trained on visual features corresponding to individual layers from representative deep networks. The classifier learns a decision
boundary (shown in red) that best discriminates target/animal and distractor/non-animal images.
Here, we consider the signed distance from this decision boundary (blue dotted lines) as a measure
of the model?s confidence on the classification of individual images. A larger distance indicates
higher confidence. For example, while images (a) and (b) are both correctly classified, the model?s
confidence for image (a) correctly classified as animal is higher than that of (b) correctly classified as
non-animal. Incorrectly classified images, such as (c) are assigned negative scores corresponding to
how far onto the wrong side of the boundary they fall.
resolution of 256 × 256. We estimate this to correspond to a stimulus size between approximately
5°-11° visual angle depending on the participants' screen size and specific seating arrangement.
The subjects pool was limited to users connections from the United States using either the Firefox or
Chrome browser on a non-mobile device. Subjects also needed to have a minimal average approval
rating of 95% on past Mechanical Turk tasks.
As stated above, we ran 7 experiments altogether for a total of 2,100 unique images. Each experiment
lasted 20-30 min and contained a total of 300 trials divided into 6 blocks (50 image presentations /
trials each). Six of the experiments followed the standard experimental paradigm described above
(1,800 images and 204 participants). The other 300 images and 77 participants were reserved for a
control experiment in which the maximum reaction time per block was set to 500 ms, 1,000 ms, and
1,500 ms for two block each. (See below.)
Computational models We tested the accuracy of individual layers from state-of-the-art deep
networks including AlexNet [11], VGG16 and VGG19 [17]. Feature responses were extracted
from different processing stages (Caffe implementation [10] using pre-trained weights). For fully
connected layers, features were taken as is; for convolutional layers, a subset of 4,096 features
was extracted via random sampling. Model decisions were based on the output of a linear SVM
(scikit-learn [14] implementation) trained on 80,000 ImageNet images (C regularization parameter
optimized by cross-validation). Qualitatively similar results were obtained with regularized logistic
regression. Feature layer accuracy was computed from SVM performance.
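The layer-wise readout just described can be sketched as follows. This is a minimal scikit-learn sketch assuming features have already been extracted per layer; the random 4,096-feature subsampling mirrors the convolutional-layer treatment, and all names and defaults are ours (the paper cross-validates the C parameter):

```python
import numpy as np
from sklearn.svm import LinearSVC

def layer_accuracy(feats_train, y_train, feats_test, y_test, n_keep=4096, seed=0):
    rng = np.random.RandomState(seed)
    if feats_train.shape[1] > n_keep:  # random subsample for convolutional layers
        idx = rng.choice(feats_train.shape[1], n_keep, replace=False)
        feats_train, feats_test = feats_train[:, idx], feats_test[:, idx]
    clf = LinearSVC(C=1.0)  # C would be tuned by cross-validation
    clf.fit(feats_train, y_train)
    return clf, clf.score(feats_test, y_test)
```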
Model confidence for individual test stimuli was defined as the estimated distance from the decision
boundary (see Figure 2). A similar confidence score was computed for human participants by
considering the fraction of correct responses for individual images. Spearman's rho rank-order
correlations (rs) were computed between classifier confidence outputs and human decision scores.
Bootstrapped 95% confidence intervals (CIs) were calculated on human-model correlation and
human classification scores. Bootstrap runs (n=300) were based on 180 participants sampled with
replacement from the subject pool. CIs were computed by considering the bottom 2.5% and top
97.5% values as upper and lower bounds.
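A sketch of this agreement analysis, under the assumption that the human data is stored as a binary subjects-by-images matrix; the bootstrap follows the text (the paper samples 180 participants per run), and all variable names are ours:

```python
import numpy as np
from scipy.stats import spearmanr

def model_confidence(clf, feats, y):
    # signed distance to the SVM boundary; negative for misclassified images
    margins = clf.decision_function(feats)
    return np.where(y == 1, margins, -margins)

def bootstrap_correlation(model_scores, per_subject_correct, n_boot=300, seed=0):
    # per_subject_correct: (n_subjects, n_images) binary matrix
    rng = np.random.RandomState(seed)
    n_subj = per_subject_correct.shape[0]
    rhos = []
    for _ in range(n_boot):
        sample = rng.choice(n_subj, n_subj, replace=True)
        human = per_subject_correct[sample].mean(axis=0)  # per-image % correct
        rhos.append(spearmanr(model_scores, human).correlation)
    return np.percentile(rhos, [2.5, 97.5])  # bootstrapped 95% CI
```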
3 Results
We computed the accuracy of individual layers from commonly used deep networks: AlexNet [11]
as well as VGG16 and VGG19 [17]. The accuracy of individual layers for networks pre-trained on
the ILSVRC 2012 challenge (1,000 categories) is shown in Figure 3 (a). The depth of individual
layers was normalized with respect to the maximum depth as layer depth varies across models. In
addition, we selected VGG16 as the most popular state-of-the-art model and fine-tuned it on the
animal vs. non-animal categorization task at hand. Accuracy for all models increased monotonically
(near linearly) as a function of depth to reach near perfect accuracy for the top layers for the best
networks (fine-tuned VGG16). Indeed, all models exceeded human accuracy on this rapid animal vs.
non-animal categorization task. Fine-tuning did improve test accuracy slightly from 95.0% correct to
97.0% correct on VGG16 highest layer, but the performance of all networks remained high in the
absence of any fine-tuning.
To benchmark these models, we assessed human participants' accuracy and reaction times (RTs) on
this animal vs. non-animal categorization task. On average, participants responded correctly with an
accuracy of 77.4% (± 1.4%). These corresponded to an average d' of 1.06 (± 0.06). Trials for which
participants failed to answer before the deadline were excluded from the evaluation (13.7% of the total
number of trials). The mean RT for correct responses was 429 ms (± 103 ms standard deviation). We
computed the minimum reaction time MinRT defined as the first time bin for which correct responses
start to significantly outnumber incorrect responses [6]. The MinRT is often considered a floor limit
for the entire visuo-motor sequence (feature analysis, decision making, and motor response) and
could be completed within a temporal window as short as 370 ms ± 75 ms. We computed this using
a binomial test (p < 0.05) on classification accuracy from per-subject RT data sorted into 20 ms bins
and found the median value of the corresponding distribution.
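The MinRT computation can be sketched as follows for a single subject (the paper then takes the median of this value over subjects). The data layout is assumed, and scipy's binomtest stands in for the binomial test:

```python
import numpy as np
from scipy.stats import binomtest

def min_rt(rts_ms, correct, bin_ms=20, alpha=0.05):
    rts = np.asarray(rts_ms)
    correct = np.asarray(correct, dtype=bool)
    bins = (rts // bin_ms).astype(int)  # sort RTs into 20 ms bins
    for b in sorted(set(bins)):
        in_bin = bins == b
        n_corr, n_tot = int(correct[in_bin].sum()), int(in_bin.sum())
        # first bin where correct responses significantly outnumber incorrect ones
        if binomtest(n_corr, n_tot, 0.5, alternative="greater").pvalue < alpha:
            return b * bin_ms
    return None
```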
Confidence scores for each of the 1,800 (animal and non-animal) main experiment images were
calculated for human participants and all the computational models. The resulting correlation
coefficients are shown in Figure 3 (b). Human inter-subject agreement, measured as Spearman's
rho correlation between 1,000 randomly selected pairs of bootstrap runs, is ρ = 0.74 (± 0.05).
Unlike individual model layer accuracy which increases monotonically, the correlation between these
same model layers and human participants peaked for intermediate layers and decreased for deeper
layers. This drop-off is stable across all tested architectures and started around at 70% of the relative
model depth. For comparison, we re-plotted the accuracy of the individual layers and correlation to
human participants for the fine-tuned VGG16 model in Figure 3 (c). The drop-off in correlation to
human responses begins after layer conv5_2, where the correlation peaks at 0.383 ± 0.026. Without
adjustment, i.e. correlating the answers including correctness, the peak lies at the same layer at
0.829 ± 0.008 (see supplement B for graph).
Example images in which humans and VGG16 top layer disagree are shown in Figure 4. The
model typically outperforms humans on elongated animals such as snakes and worms, as well as
camouflaged animals and when objects are presented in an atypical context. Human participants
outperform the model on typical, iconic illustrations such as a cat looking directly at the camera.
We verified that the maximum response time (500 ms) allowed did not qualitatively affect our results.
We ran a control experiment (77 participants) on a set of 300 images where we systematically varied
the maximum response time available (500 ms, 1,000 ms and 2,000 ms). We evaluated differences in
categorization accuracy using a one-way ANOVA with Tukey?s HSD for post-hoc correction. The
accuracy increased significantly from 500 to 1,000 ms (from 74 % to 84 %; p < 0.01). However,
no significant difference was found between 1,000 and 2,000 ms (both ? 84%; p > 0.05). Overall,
we found no qualitative difference in the observed pattern of correlation between human and model
decision scores for longer response times (results in supplement A). We found an overall slight
upward trend for both intermediate and higher layers for longer response times.
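A sketch of this control analysis, assuming per-condition accuracy vectors; scipy provides the one-way ANOVA and statsmodels the Tukey HSD correction, and the data layout is ours:

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def compare_deadlines(acc_500, acc_1000, acc_2000):
    F, p = f_oneway(acc_500, acc_1000, acc_2000)   # one-way ANOVA across conditions
    scores = np.concatenate([acc_500, acc_1000, acc_2000])
    groups = (["500"] * len(acc_500) + ["1000"] * len(acc_1000)
              + ["2000"] * len(acc_2000))
    return F, p, pairwise_tukeyhsd(scores, groups)  # Tukey HSD post-hoc tests
```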
Figure 3: Comparison between models and human behavioral data: (a) Accuracy and (b) correlation between decision scores derived from various networks and human behavioral data plotted as a
function of normalized layers depth (normalized by the maximal depth of the corresponding deep net).
(c) Superimposed accuracy and correlation between decision scores derived from the best performing
network (VGG16 fine-tuned (ft) for animal categorization) and human behavioral data plotted as a
function of the raw layers depth. Lines are fitted polynomials of 2nd (accuracy) and 3rd (correlation)
degree order. Shaded red background corresponds to 95% CI estimated via bootstrapping shown
for fine-tuned VGG16 model only for readability. Gray curve corresponds to human accuracy (CIs
shown with dashed lines).
Figure 4: Sample images where human participants and model (VGG16 layer fc7) disagree. H:
Average human decision score (% correct). M: Model decision score (distance to decision boundary).
4 Discussion
The goal of this study was to perform a computational-level analysis aimed at characterizing the
visual representations underlying rapid visual categorization.
To this end, we have conducted a large-scale psychophysics study using a fast-paced animal vs.
non-animal categorization task. This task is ecologically significant and has been extensively used in
previous psychophysics studies [reviewed in 6]. We have considered 3 state-of-the-art deep networks:
AlexNet [11] as well as VGG16 and VGG19 [17]. We have performed a systematic analysis of the
accuracy of these models' individual layers for the same animal/non-animal categorization task. We
have also assessed the agreement between model and human decision scores for individual images.
Overall, we have found that the accuracy of individual layers consistently increased as a function
of depth for all models tested. This result confirms the current trend in computer vision that better
performance on recognition challenges is typically achieved by deeper networks. This result is also
consistent with an analysis by Yu et al. [21], who have shown that both sparsity of the representation
and distinctiveness of the matched features increase monotonically with the depth of the network.
However, the correlation between model and human decision scores peaked at intermediate layers and
decreased for deeper layers. These results suggest that human participants may rely on visual features
of intermediate complexity and that the complexity of visual representations afforded by modern
deep network models may exceed those used by human participants during rapid categorization. In
particular, the top layers (final convolutional and fully connected), while showing an improvement
in accuracy, no longer maximize correlation with human data. Whether this result is based on
the complexity of the representation or invariance properties of intermediate layers remains to be
investigated. It should be noted that a depth of ? 10 layers of processing has been suggested as an
estimate for the number of processing stages in the ventral stream of the visual cortex [16].
How then does the visual cortex achieve greater depth of processing when more time is allowed for
categorization? One possibility is that speeded categorization reflects partial visual processing up to
intermediate levels while longer response times would allow for deeper processing in higher stages.
We compared the agreement between model and human decisions scores for longer response times
(500 ms, 1,000 ms and 2,000 ms). While the overall correlation increased slightly for longer response
times, this higher correlation did not appear to differentially affect high- vs. mid-level layers.
An alternative hypothesis is that greater depth of processing for longer response times is achieved via
recurrent circuits and that greater processing depth is achieved through time. The fastest behavioral
responses would thus correspond to bottom-up / feed-forward processing. This would be followed by
re-entrant and other top-down signals [7] when more time is available for visual processing.
Acknowledgments
We would like to thank Matt Ricci for his early contribution to this work and further discussions.
This work was supported by NSF early career award [grant number IIS-1252951] and DARPA young
faculty award [grant number YFA N66001-14-1-4037]. Additional support was provided by the
Center for Computation and Visualization (CCV).
References
[1] M. Cauchoix, S. M. Crouzet, D. Fize, and T. Serre. Fast ventral stream neural activity enables
rapid visual categorization. Neuroimage, 125:280-290, 2016. ISSN 10538119. doi: 10.1016/j.
neuroimage.2015.10.012.
[2] S. M. Crouzet and T. Serre. What are the Visual Features Underlying Rapid Object Recognition?
Front. Psychol., 2(November):326, jan 2011. ISSN 1664-1078. doi: 10.3389/fpsyg.2011.00326.
[3] M. J. C. Crump, J. V. McDonnell, and T. M. Gureckis. Evaluating Amazon's Mechanical Turk
as a tool for experimental behavioral research. PLoS ONE, 8(3), 2013.
[4] J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical
image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE
Conference on, pages 248?255. IEEE, 2009.
[5] J. J. Dicarlo, D. Zoccolan, and N. C. Rust. How does the brain solve visual object recognition?
Neuron, 73(3):415-434, Feb 2012. ISSN 1097-4199. doi: 10.1016/j.neuron.2012.01.010.
[6] M. Fabre-Thorpe. The characteristics and limits of rapid visual categorization. Front. Psychol.,
2(October):243, jan 2011. ISSN 1664-1078. doi: 10.3389/fpsyg.2011.00243.
[7] C. D. Gilbert and W. Li. Top-down influences on visual processing. Nat. Rev. Neurosci., 14(5):
350-363, May 2013. ISSN 1471-0048. doi: 10.1038/nrn3476.
[8] K. He, X. Zhang, S. Ren, and J. Sun. Delving Deep into Rectifiers: Surpassing Human-Level
Performance on ImageNet Classification. feb 2015.
[9] K. He, X. Zhang, S. Ren, and J. Sun. Identity mappings in deep residual networks. CoRR,
abs/1603.05027, 2016.
[10] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell.
Caffe: convolutional architecture for fast feature embedding. In Proceedings of the 2014 ACM
Conference on Multimedia (MM 2014), pages 10005-10014, 2014.
[11] A. Krizhevsky, I. Sutskever, and G. Hinton. ImageNet Classification with Deep Convolutional
Neural Networks. In Neural Inf. Process. Syst., Lake Tahoe, Nevada, 2012.
[12] Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 521(7553):436-444, May 2015.
ISSN 0028-0836. doi: 10.1038/nature14539.
[13] J. V. McDonnell, J. B. Martin, D. B. Markant, A. Coenen, A. S. Rich, and T. M.
Gureckis. psiTurk (version 1.02) [software]. New York, NY: New York University. Available
from https://github.com/nyuccl/psiturk, 2012.
[14] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel,
P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher,
M. Perrot, and E. Duchesnay. Scikit-learn: Machine Learning in Python. J. Mach. Learn. Res.,
12:2825-2830, 2011.
[15] M. C. Potter. Recognition and memory for briefly presented scenes. Front. Psychol., 3:32, jan
2012. ISSN 1664-1078. doi: 10.3389/fpsyg.2012.00032.
[16] T. Serre, A. Oliva, and T. Poggio. A feedforward architecture accounts for rapid categorization.
PNAS, 104(15):6424-6429, 2007.
[17] K. Simonyan and A. Zisserman. Very Deep Convolutional Networks for Large-Scale Image
Recognition. Technical report, sep 2014.
[18] I. Sofer, S. Crouzet, and T. Serre. Explaining the timing of natural scene understanding with a
computational model of perceptual categorization. PLoS Comput Biol, 2015.
[19] R. VanRullen. The power of the feed-forward sweep. Adv. Cogn. Psychol., 3(1-2):167-176,
2007.
[20] D. L. K. Yamins and J. J. DiCarlo. Using goal-driven deep learning models to understand sensory
cortex. Nat. Neurosci., 19(3):356-365, Feb 2016. ISSN 1097-6256. doi: 10.1038/nn.4244.
[21] W. Yu, K. Yang, Y. Bai, H. Yao, and Y. Rui. Visualizing and comparing convolutional neural
networks. arXiv preprint arXiv:1412.6631, 2014.
Constraints Based Convex Belief Propagation
Yaniv Tenzer
Department of Statistics
The Hebrew University
Alexander Schwing
Department of Electrical and Computer Engineering
University of Illinois at Urbana-Champaign
Kevin Gimpel
Toyota Technological Institute at Chicago
Tamir Hazan
Faculty of Industrial Engineering and Management
Technion - Israel Institute of Technology
Abstract
Inference in Markov random fields subject to consistency structure is a fundamental
problem that arises in many real-life applications. In order to enforce consistency,
classical approaches utilize consistency potentials or encode constraints over feasible instances. Unfortunately this comes at the price of a tremendous computational
burden. In this paper we suggest to tackle consistency by incorporating constraints
on beliefs. This permits derivation of a closed-form message-passing algorithm
which we refer to as the Constraints Based Convex Belief Propagation (CBCBP).
Experiments show that CBCBP outperforms the conventional consistency potential
based approach, while being at least an order of magnitude faster.
1
Introduction
Markov random fields (MRFs) [10] are widely used across different domains from computer vision
and natural language processing to computational biology, because they are a general tool to describe
distributions that involve multiple variables. The dependencies between variables are conveniently
encoded via potentials that define the structure of a graph.
Besides encoding dependencies, in a variety of real-world applications we often want consistent
solutions that are physically plausible, e.g., when jointly reasoning about multiple tasks or when
enforcing geometric constraints in 3D indoor scene understanding applications [18]. Therefore,
various methods [22, 13, 16, 12, 1] enforce consistency structure during inference by imposing
constraints on the feasible instances. This was shown to be effective in practice. However for each
new constraint we may need to design a specifically tailored algorithm. Therefore, the most common
approach to impose consistency is usage of PN-consistency potentials [9]. This permits reuse of
existing message passing solvers, however, at the expense of an additional computational burden, as
real-world applications may involve hundreds of additional factors.
Our goal in this work is to bypass this computational burden while being generally applicable. To
do so, we consider the problem of inference when probabilistic equalities are imposed over the
beliefs of the model rather than its feasible instances. As we show in Sec. 3, the adaptive nature of
message passing algorithms conveniently allows for such probabilistic equality constraints within its
framework. Since our method eliminates potentially many multivariate factors, inference is much
more scalable than using PN-consistency potentials [9].
In this paper, for notational simplicity, we illustrate the belief constraints based message passing rules
using a framework known as convex belief propagation (CBP). We refer to the illustrated algorithm
as constraints based CBP (CBCBP). However we note that the same derivation can be used to obtain,
e.g., a constraints based tree-reweighted message passing algorithm.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
We evaluate the benefits of our algorithm on semantic image segmentation and machine translation
tasks. Our results indicate that CBCBP improves accuracy while being at least an order of magnitude
faster than CBP.
2
Background
In this section we review the standard CBP algorithm. To this end we consider joint distributions defined over a set of discrete random variables $X = (X_1, \dots, X_n)$. The distribution $p(x_1, \dots, x_n)$ is assumed to factor into a product of non-negative potential functions, i.e., $p(x_1, \dots, x_n) \propto \exp\left(\sum_r \theta_r(x_r)\right)$, where $r \subseteq \{1, \dots, n\}$ is a subset of variable indices, which we use to restrict the domain via $x_r = (x_i)_{i \in r}$. The real-valued functions $\theta_r(x_r)$ assign a preference to each of the variables in the subset $r$. To visualize the factorization structure we use a region graph, i.e., a generalization of factor graphs. In this graph, each real-valued function $\theta_r(x_r)$ corresponds to a node. Nodes $\theta_r$ and $\theta_p$ can be connected if $r \subset p$. Hence the parent set $P(r)$ of a region $r$ contains index sets $p \in P(r)$ if $r \subset p$. Conversely we define the set of children of region $r$ as $C(r) = \{c : r \in P(c)\}$.
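A minimal sketch of this region-graph bookkeeping, representing regions as frozensets of variable indices; the representation is ours:

```python
def build_region_graph(regions):
    regions = [frozenset(r) for r in regions]
    parents = {r: [p for p in regions if r < p] for r in regions}   # P(r)
    children = {r: [c for c in regions if c < r] for r in regions}  # C(r)
    return parents, children

# e.g. regions {0}, {1}, {0,1}: P({0}) contains {0,1}; C({0,1}) = [{0}, {1}]
parents, children = build_region_graph([{0}, {1}, {0, 1}])
```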
An important inference task is computation of the marginal probabilities $p(x_r) = \sum_{x \setminus x_r} p(x)$.
Whenever the region graph has no cycles, marginals are easily computed using belief propagation.
Unfortunately, this algorithm may not converge in the presence of cycles. To fix convergence a variety
of approximations have been suggested, one of which is known as convex belief propagation (CBP).
CBP performs block-coordinate descent over the dual function of the following program:
$$\max_{b} \sum_{r, x_r} b_r(x_r)\theta_r(x_r) + \sum_r H(b_r) \quad \text{s.t.} \quad \begin{cases} \forall r: \; b_r(x_r) \ge 0, \; \sum_{x_r} b_r(x_r) = 1, \\ \forall r, \, p \in P(r): \; \sum_{x_p \setminus x_r} b_p(x_p) = b_r(x_r). \end{cases} \qquad (1)$$
This program is defined over marginal distributions $b_r(x_r)$ and incorporates their entropy $H(b_r)$ in addition to the potential function $\theta_r$.
In many real world applications we require the solution to be consistent, i.e., hard constraints between
some of the involved variables exist. For example, consider the case where X1 , X2 are two binary
variables such that for every feasible joint assignment, x1 = x2 . To encourage consistency while
reusing general purpose solvers, a PN-consistency potential [9] is often incorporated into the model:
$$\theta_{1,2}(x_1, x_2) = \begin{cases} 0 & x_1 = x_2 \\ -c & \text{otherwise.} \end{cases} \qquad (2)$$
Hereby c is a positive constant that is tuned to penalize violations of consistency. As c increases, the following constraint increasingly holds:
$$b_1(X_1 = x_1) = b_2(X_2 = x_2). \qquad (3)$$
However, usage of PN-potentials raises concerns: (i) increasing the number of pairwise constraints
decreases computational efficiency, (ii) enforcing consistency in a soft manner requires tuning of an
additional parameter c, (iii) large values of c reduce convergence, and (iv) large values of c result in
corresponding beliefs being assigned zero probability mass which is not desirable.
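To make the contrast concrete, here is a minimal sketch of the two mechanisms for a pair of binary variables; the constraint encoding is an illustrative placeholder, not the algorithm's internal representation:

```python
import numpy as np

def pn_potential(n_states, c):
    # theta(x1, x2) = 0 if x1 == x2, -c otherwise (Eq. 2)
    return np.where(np.eye(n_states, dtype=bool), 0.0, -c)

theta_12 = pn_potential(2, c=1.0)   # extra pairwise factor the CBP baseline adds
# CBCBP instead drops the factor and ties marginals directly, e.g. b1(X1=0) = b2(X2=0):
constraints = [[("b1", 0), ("b2", 0)]]  # one constraint k with N(k) = {(r, x_r^k)}
```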
To alleviate these issues we suggest to enforce the equality constraints given in Eq. (3) directly during
optimization of the program given in Eq. (1). We refer to the additionally introduced constraints as
consistency constraints.
At this point two notes are in place. First we emphasize that utilizing consistency constraints instead
of PN-consistency potentials has a computational advantage, since it omits all pairwise beliefs that
correspond to consistency potentials. Therefore it results in an optimization problem with fewer
functions, which is expected to be solvable more efficiently.
Second we highlight that the two approaches are not equivalent. Intuitively, as c increases, we expect
consistency constraints to yield better results than usage of PN-potentials. Indeed, as c increases, the
PN-consistency potential enforces the joint distribution to be diagonal, i.e., $b(X_1 = i, X_2 = j) = 0$ for all $i \ne j$. However, the consistency constraint as specified in Eq. (3) only requires the univariate
marginals to agree. The latter is a considerably weaker requirement, as a diagonal pairwise distribution
implies agreement of the univariate marginals, but the opposite direction does not hold. Consequently,
using consistency constraints results in a larger search space, which is desirable.
Algorithm 1 Constraints Based Convex Belief Propagation (CBCBP)
Repeat until convergence:
Update $\lambda$ messages - for each $r$ update for all $p \in P(r)$, $x_r$:
$$\mu_{p \to r}(x_r) = \ln \sum_{x_p \setminus x_r} \exp\Big(\theta_p(x_p) - \sum_{p' \in P(p)} \lambda_{p \to p'}(x_p) + \sum_{r' \in C(p) \setminus r} \lambda_{r' \to p}(x_{r'}) - \sum_{k \in K_p} \nu_{p \to k}(x_p)\Big)$$
$$\lambda_{r \to p}(x_r) = \frac{1}{1 + |P(r)|}\Big(\theta_r(x_r) + \sum_{c \in C(r)} \lambda_{c \to r}(x_c) + \sum_{p' \in P(r)} \mu_{p' \to r}(x_r) - \sum_{k \in K_r} \nu_{r \to k}(x_r)\Big) - \mu_{p \to r}(x_r)$$
Update $\nu$ messages - for each $k \in K$ update for all $r \in N(k)$ using $\beta_{r,k}$ as defined in Eq. (6):
$$\nu_{r \to k}(x_r^k) = \log \beta_{r,k} - \frac{1}{|N(k)|} \sum_{r' \in N(k)} \log \beta_{r',k}$$
Figure 1: The CBCBP algorithm. Shown are the update rules for the $\lambda$ and $\nu$ messages.
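A minimal numpy sketch of the closed-form ν-update of Fig. 1, under the simplifying assumption that each region participates in this one constraint only (so K_r = {k} and the extra message terms in Eq. (6) vanish); names and data layout are ours:

```python
import numpy as np

def nu_update(theta_hat, members):
    # theta_hat[r]: reparameterized scores theta_r(., lambda) as a 1-D array
    # members: list of (r, s) pairs describing N(k), with s the index of x_r^k
    log_beta = {}
    for r, s in members:
        rest = np.delete(theta_hat[r], s)                    # all states but x_r^k
        log_beta[r] = theta_hat[r][s] - np.log(np.exp(rest).sum())   # Eq. (6)
    mean_lb = np.mean(list(log_beta.values()))
    # nu_{r->k}(x_r^k) = log beta_{r,k} - (1/|N(k)|) sum_{r'} log beta_{r',k},
    # which satisfies sum_{r in N(k)} nu_{r->k}(x_r^k) = 0 by construction
    return {r: log_beta[r] - mean_lb for r, _ in members}

# toy usage: one constraint tying state 0 of two regions with 3 and 2 states
nu = nu_update({"r1": np.array([0.5, -0.1, 0.2]), "r2": np.array([1.0, 0.0])},
               [("r1", 0), ("r2", 0)])
```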
Next we derive a general message-passing algorithm that aims at solving the optimization problem
given in Eq. (1) subject to consistency constraints of the form given in Eq. (3).
3 Constraints Based Convex Belief Propagation (CBCBP)
To enforce consistency of beliefs we want to incorporate constraints of the form $b_{r_1}(x_{r_1}) = \dots = b_{r_m}(x_{r_m})$. Each constraint involves a set of regions $r_i$ and some of their assignments $x_{r_i}$. If this constraint involves more than two regions, i.e., if $m > 2$, it is easier to formulate the constraint as a series of constraints $b_{r_i}(x_{r_i}) = v$, $i \in \{1, \dots, m\}$, for some constant $v$ that eventually cancels.
Generally, given a constraint $k$, we define the set of its neighbours $N(k)$ to be the involved regions $r_i^k$ as well as the involved assignments $x_{r_i}^k$, i.e., $N(k) = \{(r_i^k, x_{r_i}^k)\}_{i=1}^m$. To simplify notation we subsequently use $r \in N(k)$ instead of $(r, x_r) \in N(k)$. However, it should be clear from the context that each region $r^k$ is matched with a value $x_r^k$.
We subsume all constraints within the set $K$. Additionally, we let $K_r$ denote the set of all those constraints $k$ which depend on region $r$, i.e., $K_r = \{k : r \in N(k)\}$.
Using the aforementioned notation we are now ready to augment the conventional CBP given in
Eq. (1) with one additional set of constraints. The CBCBP program then reads as follows:
$$\max_{b} \sum_{r, x_r} b_r(x_r)\theta_r(x_r) + \sum_r H(b_r) \quad \text{s.t.} \quad \begin{cases} \forall r: \; b_r(x_r) \ge 0, \; \sum_{x_r} b_r(x_r) = 1 \\ \forall r, \, p \in P(r), \, x_r: \; \sum_{x_p \setminus x_r} b_p(x_p) = b_r(x_r) \\ \forall k \in K, \, r \in N(k): \; b_r(x_r^k) = v_k \end{cases} \qquad (4)$$
To solve this program we observe that its constraint space exhibits a rich structure, defined on the
one hand by the parent set P , and on the other hand by the neighborhood of the constraint subsumed
in the set K. To exploit this structure, we aim at deriving the dual which is possible because the
program is strictly convex. Importantly we can subsequently derive block-coordinate updates for the
dual variables, which are efficiently computable in closed form. Hence solving the program given in
Eq. (4) via its dual is much more effective. In the following we first present the dual before discussing
how to efficiently solve it.
Derivation of the dual program: The dual program of the task given in Eq. (4) is obtained by
using the Lagrangian as shown in the following lemma.
Lemma 3.1: The dual problem associated with the primal program given in Eq. (4) is
$$\min_{\lambda, \nu} \sum_r \log \sum_{x_r} \exp\Big(\hat\theta_r(x_r, \lambda) - \sum_{k \in K_r} \nu_{r \to k}(x_r)\Big) \quad \text{s.t.} \quad \forall k \in K, \; \sum_{r \in N(k)} \nu_{r \to k}(x_r^k) = 0,$$
where we set $\nu_{r \to k}(x_r) = 0$ for all $k \in K$, $r \in N(k)$, $x_r \ne x_r^k$, and where we introduced $\hat\theta_r(x_r, \lambda) = \theta_r(x_r) - \sum_{p \in P(r)} \lambda_{r \to p}(x_r) + \sum_{c \in C(r)} \lambda_{c \to r}(x_c)$.
Proof: We begin by defining a Lagrange multiplier for each of the constraints given in Eq. (4). Concretely, for all $r$, $p \in P(r)$, $x_r$ we let $\lambda_{r \to p}(x_r)$ be the Lagrange multiplier associated with the marginalization constraint $\sum_{x_p \setminus x_r} b_p(x_p) = b_r(x_r)$. Similarly, for all $k \in K$, $r \in N(k)$, we let $\nu_{r \to k}(x_r^k)$ be the Lagrange multiplier that is associated with the constraint $b_r(x_r^k) = v_k$.
The corresponding Lagrangian is then given by
$$L(b, \lambda, \nu) = \sum_{r, x_r} b_r(x_r)\Big(\hat\theta_r(x_r, \lambda) - \sum_{k \in K_r} \nu_{r \to k}(x_r)\Big) + \sum_r H(b_r) + \sum_{k \in K, \, r \in N(k)} \nu_{r \to k}(x_r^k) v_k,$$
where $\hat\theta_r(x_r, \lambda) = \theta_r(x_r) - \sum_{p \in P(r)} \lambda_{r \to p}(x_r) + \sum_{c \in C(r)} \lambda_{c \to r}(x_c)$ and $\nu_{r \to k}(x_r) = 0$ for all $k$, $r \in N(k)$, $x_r \ne x_r^k$.
Due to conjugate duality between the entropy and the log-sum-exp function [25], the dual function is:
$$D(\lambda, \nu) = \max_b L(b, \lambda, \nu) = \sum_r \log \sum_{x_r} \exp\Big(\hat\theta_r(x_r, \lambda) - \sum_{k \in K_r} \nu_{r \to k}(x_r)\Big) + \sum_k v_k \sum_{r \in N(k)} \nu_{r \to k}(x_r^k).$$
The result follows since the dual function is unbounded from below with respect to the Lagrange multipliers $\nu_{r \to k}(x_r^k)$, requiring the constraints.
Derivation of message passing update rules: As mentioned before we can derive block-coordinate descent update rules for the dual which are computable in closed form. Hence the
dual given in Lemma 3.1 can be solved efficiently, which is summarized in the following theorem:
Theorem 3.2: A block coordinate descent over the dual problem given in Lemma 3.1 results in a
message passing algorithm whose details are given in Fig. 1 and which we refer to as the CBCBP
algorithm. It is guaranteed to converge.
Before proving this result, we provide intuition for the update rules: as in the standard and distributed [19] CBP algorithm, each region $r$ sends a message to its parents via the dual variable $\lambda_{r \to p}$. Differently from CBP but similar to distributed variants [19], our algorithm has another type of messages, i.e., the $\nu$ messages. Conceptually, think of the constraints as a new node. A constraint node $k$ is connected to a region $r$ if $r \in N(k)$. Hence, a region $r$ "informs" the constraint node using the dual variable $\nu_{r \to k}$. We now show how to derive the message passing rules to optimize the dual.
Proof: First we note that convergence is guaranteed by the strict convexity of the primal problem [6].
Next we begin by optimizing the dual function given in Lemma 3.1 with respect to the $\lambda$ parameters. Specifically, for a chosen region $r$ we optimize the dual w.r.t. a block of Lagrange multipliers $\lambda_{r \to p}(x_r)$, $\forall p \in P(r), x_r$. To this end we derive the dual with respect to $\lambda_{r \to p}(x_r)$ while keeping all other variables fixed. The technique for solving the optimality conditions follows existing literature, augmented by the messages $\nu_{r \to k}$. It yields the update rules given in Fig. 1.
Next we turn to optimizing the dual with respect to the Lagrange multipliers $\nu$. Recall that each constraint $k \in K$ in the dual function given in Lemma 3.1 is associated with the linear constraint $\sum_{r \in N(k)} \nu_{r \to k}(x_r^k) = 0$. Therefore we employ a Lagrange multiplier $\gamma_k$ for each $k$. For compact exposition, we introduce the Lagrangian that is associated with a constraint $k$, denoted by $L_k$:
$$L_k(\nu, \gamma) = \sum_{r \in N(k)} \log \sum_{x_r} \exp\Big(\hat\theta_r(x_r, \lambda) - \sum_{k' \in K_r} \nu_{r \to k'}(x_r)\Big) + \gamma_k \sum_{r \in N(k)} \nu_{r \to k}(x_r^k).$$
Deriving $L_k$ with respect to $\nu_{r \to k}$, $\forall r \in N(k)$, and using optimality conditions, we then arrive at
$$\nu_{r \to k}(x_r^k) = \log \beta_{r,k} - \log \frac{1 + \gamma_k}{-\gamma_k} \qquad (5)$$
for all $r \in N(k)$, where
$$\beta_{r,k} = \frac{\exp\big(\hat\theta_r(x_r^k, \lambda) - \sum_{k' \in K_r \setminus k} \nu_{r \to k'}(x_r^k)\big)}{\sum_{x_r \setminus x_r^k} \exp\big(\hat\theta_r(x_r, \lambda) - \sum_{k' \in K_r} \nu_{r \to k'}(x_r)\big)}. \qquad (6)$$
          n = 100        n = 200        n = 300        n = 400
CBP       1.47 ± 2e-4    2.7 ± 1e-4     5.95 ± 3e-3    13.42 ± 2e-3
CBCBP     0.05 ± 3e-4    0.11 ± 1e-4    0.23 ± 2e-3    0.43 ± 1e-3
Table 1: Average running time and standard deviation, over 10 repetitions, of CBCBP and CBP. Both infer the parameters of MRFs that consist of n variables.
       c = 2           c = 4           c = 6           c = 8           c = 10
CBP    31.40 ± 0.74    42.05 ± 1.02    49.17 ± 1.27    53.35 ± 0.93    58.01 ± 0.82
Table 2: Average speedup factor and standard deviation, over 10 repetitions, of CBCBP compared to CBP, for different values of c. Both infer the beliefs of MRFs that consist of 200 variables.
Summing the right hand side of Eq. (5) over $r \in N(k)$ and using the constraint $\sum_{r \in N(k)} \nu_{r \to k}(x_r^k) = 0$ results in
$$\frac{1 + \gamma_k}{-\gamma_k} = \Big(\prod_{r \in N(k)} \beta_{r,k}\Big)^{\frac{1}{|N(k)|}}.$$
Finally, substituting this result back into Eq. (5) yields the desired update rule.
We summarized the resulting algorithm in Fig. 1 and now turn our attention to its evaluation.
4 Experiments
We first demonstrate the applicability of the procedure using synthetic data. We then turn to image
segmentation and machine translation, using real-world datasets. As our work directly improves the
standard CBP approach, we use it as a baseline.
4.1 Synthetic Evaluation
Consider two variables X and Y, where Y is binary and the support of X consists of L levels, {1, . . . , L}. Assume we
are given the following PN-consistency potential:
$$\theta_{x,y}(x, y) = \begin{cases} 0 & (y = 1 \wedge x = 1) \vee (y = 0 \wedge x \ne 1) \\ -c & \text{otherwise,} \end{cases} \qquad (7)$$
where c is some positive parameter. This potential encourages the assignment y = 1 to agree with
the assignment x = 1 and y = 0 to agree with x ∈ {2, . . . , L}. Phrased differently, this potential
favours beliefs such that
$$b_y(y = 1) = b_x(x = 1), \qquad b_y(y = 0) = b_x(x \ne 1). \qquad (8)$$
Therefore, one may replace the above potential using a single consistency constraint. Note that the
above two constraints complement each other, hence, it suffices to include one of them. We use the
left consistency constraint since it fits our derivation.
We test this hypothesis by constructing four networks that consist of n = 2v, v = 50, 100, 150, 200
variables, where v variables are binary, denoted by Y, and the other v variables are multi-level,
subsumed within X. Note that the support of variable $X_i$, $1 \le i \le v$, consists of i states. Each
multi-level variable is matched with a binary one. For each variable we randomly generate unary
potentials according to the standard Gaussian distribution.
We then run the standard CBP algorithm using the aforementioned PN-consistency potential given in
Eq. (7) with c = 1. In a next step we replace each such potential by its corresponding consistency
constraint following Eq. (8). For each network we repeat this process 10 times and report the mean
running time and standard deviation in Tab. 1.
As expected, CBCBP is significantly faster than the standard CBP. Quantitatively, CBCBP was
approximately 25 times faster for the smallest, and more than 31 times faster for the largest graphs.
Obviously, different values of c affect the convexity of the problem and therefore also the running
time of both CBP and CBCBP. To quantify its impact we repeat the experiment with n = 200 for
distinct values of c ∈ {2, 4, 6, 8, 10}. In Tab. 2 we report the mean speedup factor over 10 repetitions,
for each value of c. As clearly evident, the speedup factors substantially increases with c.
          global accuracy   average accuracy   mean running time
CBP       84.2              74.3               1.41 ± 5e-3
CBCBP     85.4              76.1               0.02 ± 2e-3
Table 3: Global accuracy, average accuracy and mean running time as well as standard deviation for the 256 images of the MSRC-21 dataset.

        body  boat  dog   book  chair  road  cat   bird  bicycle  flower  sign  aeroplane  water  face  car   sheep  sky   cow   void  grass  tree
CBP     0.79  0.99  0.84  0.68  0.67   0.92  0.78  0.83  0.82     0.79    0.90  0.92       0.56   0.42  0.94  0.48   0.87  0.81  0.51  0.63   0
CBCBP   0.72  0.97  0.89  0.77  0.84   0.95  0.83  0.83  0.82     0.80    0.92  0.96       0.69   0.49  0.95  0.58   0.89  0.81  0.53  0.65   0
Table 4: Segmentation accuracy per class of CBCBP and CBP, for the MSRC-21 dataset.
4.2 Image Segmentation
We evaluate our approach on the task of semantic segmentation using the MSRC-21 dataset [21] as
well as the PascalVOC 2012 [4] dataset. Both contain 21 foreground classes. Each variable Xi in our
model corresponds to a super-pixel in an image. In addition, each super-pixel is associated with a
binary variable Yi , that indicates whether the super-pixel belongs to the foreground, i.e., yi = 1, or to
the background, i.e., yi = 0. The model potentials are:
Super-pixel unary potentials: For MSRC-21 these potentials are computed by averaging the
TextonBoost [11] pixel-potentials inside each super-pixel. For the PascalVOC 2012 dataset we train a
convolutional neural network following the VGG16 architecture.
Foreground/Background unary potentials: For MSRC-21 we let the value of the potential at
yi = 1 equal the value of the super-pixel unary potential that corresponds to the "void" label, and
for yi = 0 we define it to be the maximum value of the super-pixel unary potential among the other
labels. For PascalVOC 2012 we obtain the foreground/background potential by training another
convolutional neural network following again the VGG16 architecture.
Super-pixel - foreground/background consistency: We define pairwise potentials between super-pixels and the foreground/background labels using Eq. (7) and set c = 1.
Naturally, these consistency potentials encourage CBP to favour beliefs where pixels that are labeled
as "void" are also labeled as "background" and vice versa. This can also be formulated using the
constraints $b_i(X_i = 0) = b_i(Y_i = 0)$ and $b_i(X_i \ne 1) = b_i(Y_i = 1)$.
We compare the CBCBP algorithm with the standard CBP approach. For MSRC-21 we use the
standard error measure of average per class accuracy and average per pixel accuracy, denoted as
global. Performance results are provided in Tab. 3.
Appealingly, our results indicate that CBCBP outperforms the standard CBP, across both metrics.
Moreover and as summarized in Tab. 4, in 19 out of 21 classes CBCBP achieves an accuracy that is
equal to or higher than CBP. Finally, CBCBP is more than 65 times faster than CBP.
In Tab. 5 we present the average pixel accuracy as well as the Intersection over Union (IoU) metric
for the VOC2012 data. We observe CBCBP to perform better since it is able to transfer information
between the foreground-background classification and the semantic segmentation.
4.3 Machine Translation
We now consider the task of machine translation. We define a phrase-based translation model as
a factor graph with many large constraints and use CBCBP to efficiently incorporate them during
inference. Our model is inspired by the widely-used approach of [8]. Given a sentence in a source
language, the output of our phrase-based model consists of a segmentation of the source sentence
into phrases (subsequences of words), a phrase translation for each source phrase, and an ordering of
the phrase translations. See Fig. 2 for an illustration.
We index variables in our model by i = 1, . . . , n, which include source words (sw), source phrases
(sp), and translation phrase slots (tp). The sequence of source words is first segmented into source
phrases. The possible values for source word sw are Xsw = {(sw1 , sw2 ) : (sw1 ? sw ?
sw2 ) ? (sw2 ? sw1 < m)}, where m is the maximum phrase length.
If source phrase sp is used in the derivation, we say that sp aligns to a translation phrase slot tp. If
sp is not used, it aligns to ?. We define variables Xsp to indicate what sp aligns to: Xsp = {tp :
6
          average accuracy   IOU
CBP       90.6               62.69
CBCBP     91.6               62.97
Table 5: Average accuracy and IOU accuracy for the 1449 images of the VOC2012 dataset.
Figure 2: An example German sentence with a derivation of its translation. On the right we show the
xsw variables, which assign source words to source phrases, the xsp variables, which assign source
phrases to translation phrase slots, and the xtp variables, which fill slots with actual words in the
translation. Dotted lines highlight how xsw values correspond to indices of xsp variables, and xsp
values correspond to indices of xtp variables. The xsp variables for unused source spans (e.g., x(1,1) ,
x(2,4) , etc.) are not shown.
Each translation phrase slot tp generates actual target-language words which comprise the translation.
We define variables Xtp ranging over the possible target-language word sequences (translation
phrases) that can be generated at slot tp. However, not all translation phrase slots must be filled
in with translations. Beyond some value of tp (equaling the number of source phrases used in the
derivation), they must all be empty. To enforce this, we also permit a null (∅) translation.
Consistency constraints: Many derivations defined by the discrete product space $X_1 \times \dots \times X_n$
are semantically inconsistent. For example, a derivation may place the first source word into the
source phrase (1, 2) and the second source word into (2, 3). This is problematic because the phrases
overlap; each source word must be placed into exactly one source phrase. We introduce source word
consistency constraints:
$$\forall sp, \; \forall sw \in sp : \quad b_{sw}(sp) = b(sp).$$
These constraints force the source word beliefs $b_{sw}(x_{sw})$ to agree on their span. There are other
consistencies we wish to enforce in our model. Specifically, we must match a source phrase to a
translation phrase slot if and only if the source phrase is consistently chosen by all of its source words.
Formally,
$$\forall sp : \; b(sp) = \sum_{x_{sp} \ne \emptyset} b_{sp}(x_{sp}).$$
Phrase translation potentials: We use pairwise potential functions between source phrases $sp = (sw_1, sw_2)$ and their aligned translation phrase slots $tp$. We include a factor $\langle sp, tp \rangle \in E$ if $sw_1 - d \le tp \le sw_2 + d$. Letting $\sigma_{sp}$ be the actual words in sp, the potentials $\theta_{sp,tp}(x_{sp}, x_{tp})$ determine the preference of the phrase translation $\langle \sigma_{sp}, x_{tp} \rangle$ using a phrase table feature function $\phi : \langle \sigma, \sigma' \rangle \to \mathbb{R}^k$. In particular, $\theta_{sp,tp}(x_{sp}, x_{tp}) = w_p^\top \phi(\langle \sigma_{sp}, x_{tp} \rangle)$ if $x_{sp} = tp$ and a large negative value otherwise, where $w_p$ is the weight vector for the Moses phrase table feature vector.
Language model potentials: To include n-gram language models, we add potentials that score pairs of consecutive target phrases, i.e.,

θtp−1,tp(xtp−1, xtp) = θℓ Σ_{i=1}^{|xtp|} log Pr(xtp^(i) | xtp−1 ∘ xtp^(1) ∘ . . . ∘ xtp^(i−1)),

where |xtp| is the number of words in xtp, xtp^(i) is the i-th word in xtp, ∘ denotes string concatenation, and θℓ is the feature weight. This potential sums n-gram log-probabilities of words in the second of the two target phrases. Internal n-gram features and the standard word penalty feature [7] are computed in the θtp potentials, since they depend only on the words in xtp.
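For the bigram model used in the experiments below, this potential reduces to the following sketch; `bigram_logprob` is an assumed interface, not Moses code, and the phrase-boundary handling follows the description above.

```python
import math

def lm_potential(x_prev, x_tp, bigram_logprob, theta_l):
    # theta_{tp-1,tp}: sum of bigram log-probabilities of the words in the
    # second phrase, each conditioned on the preceding word (which may be
    # the last word of the previous phrase).
    score, context = 0.0, (x_prev[-1] if x_prev else "<s>")
    for word in x_tp:
        score += bigram_logprob(context, word)
        context = word
    return theta_l * score

toy_lm = lambda prev, w: math.log(0.1)        # constant toy model
print(lm_potential(("the",), ("house", "."), toy_lm, 1.0))  # ~-4.61
```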
Source phrase separation potentials: We use pairwise potentials between source phrases to prevent them aligning to the same translation slot. We also prevent two overlapping source phrases from both aligning to non-null slots (i.e., one must align to ∅). We include a factor between two source phrases if there is a translation phrase slot that may relate to both, namely ⟨sp1, sp2⟩ ∈ E if ∃ tp : ⟨sp1, tp⟩ ∈ E, ⟨sp2, tp⟩ ∈ E. The source phrase separation potential θsp1,sp2(xsp1, xsp2) is −∞ if either xsp1 = xsp2 ≠ ∅, or sp1 ∩ sp2 ≠ ∅ ∧ xsp1 ≠ ∅ ∧ xsp2 ≠ ∅. Otherwise, it is −θd |δ(sp1, sp2) − |xsp1 − xsp2||, where δ(sp1, sp2) returns the number of source words between the spans sp1 and sp2. This favors similar distances between source phrases and their aligned slots.

¹ Our distortion limit d is based on distances from source words to translation slots, rather than distances between source words as in the Moses system [7].

Table 6: %BLEU on test set, showing the contribution of the source word consistency constraints.

                              %BLEU
no sw constraints             13.12
sw constraints with CBCBP     16.73
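The sketch below spells this definition out; `delta` counts the source words strictly between the spans, and the value returned when a span is unused (zero) is our own assumption, since the definition above only covers the remaining cases.

```python
NEG_INF = float("-inf")

def delta(sp1, sp2):
    # number of source words strictly between the two spans
    (a1, b1), (a2, b2) = sorted([sp1, sp2])
    return max(0, a2 - b1 - 1)

def separation_potential(sp1, x1, sp2, x2, theta_d):
    overlap = not (sp1[1] < sp2[0] or sp2[1] < sp1[0])
    if (x1 == x2 and x1 is not None) or (overlap and None not in (x1, x2)):
        return NEG_INF
    if x1 is None or x2 is None:
        return 0.0  # assumed: no distance term when a span is unused
    return -theta_d * abs(delta(sp1, sp2) - abs(x1 - x2))

print(separation_potential((1, 2), 1, (4, 5), 2, theta_d=0.5))
# -0.0: the one-word gap matches the one-slot distance
```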
Experimental Setup: We consider German-to-English translation. As training data for constructing the phrase table, we use the WMT2011 parallel data [2], which contains 1.9M sentence pairs. We use the phrase table to compute θsp,tp and to fill Xtp. We use a bigram language model estimated from the English side of the parallel data along with 601M tokens of randomly-selected sentences from the Linguistic Data Consortium's Gigaword corpus. This is used when computing the θtp−1,tp potentials. As our test set, we use the first 150 sentences from the WMT2009 test set. Results below are (uncased) %BLEU scores [17] on this 150-sentence set.

We use maximum phrase length m = 3 and distortion limit d = 3. We run 250 iterations of CBCBP for each sentence. For the feature weights (θ), we use the default weights in Moses, since our features are analogous to theirs. Learning the weights is left to future work.
Results: We compare to a simplified version of our model that omits the sw variables and all
constraints and terms pertaining to them. This variation still contains all sp and tp variables and their
factors. This comparison shows the contribution of our novel handling of consistency constraints.
Tab. 6 shows our results. The consistency constraints lead to a large improvement for our model at
negligible increase in runtime due to our closed-form update rules. We found it impractical to attempt
to obtain these results using the standard CBP algorithm for any source sentences of typical length.
For comparison to a standard benchmark, we also trained a Moses system [7], a state-of-the-art
phrase-based system, on the same data. We used default settings and feature weights, except we
used max phrase length 3 and no lexicalized reordering model, in order to more closely match the
setting of our model. The Moses %BLEU on this dataset is 17.88. When using the source word
consistency constraints, we are within 1.2% of Moses. Our model has the virtue of being able to
compute marginals for downstream applications and also permits us to study particular forms of
constraints in phrase-based translation modeling. Future work can add or remove constraints like
we did in our experiments here in order to determine the most effective constraints for phrase-based
translation. Our efficient inference framework makes such exploration possible.
5 Related Work
Variational approaches to inference have been extensively studied in the past. We address approximate
inference using the entropy barrier function and there has been extensive work in this direction,
e.g., [24, 14, 23, 5, 19, 20] to name a few. Our work differs since we incorporate consistency
constraints within the inference engine. We show that closed-form update rules are still available.
Consistency constraints are implied when using PN-potentials [9]. However, pairwise functions
are included for every constraint which is expensive if many constraints are involved. In contrast,
constraints over the feasible instances are considered in [22, 13, 16, 12, 1]. While impressive results
have been shown, each different restriction of the feasible set may require a tailored algorithm.
In contrast, we propose to include probabilistic equalities among the model beliefs, which permits
derivation of an algorithm that is generally applicable.
6 Conclusions
In this work we tackled the problem of inference with belief based equality constraints, which arises
when consistency among variables in the network is required. We introduced the CBCBP algorithm
that directly incorporates constraints into the CBP framework and results in closed-form update
rules. We demonstrated the merit of CBCBP both on synthetic data and on two real-world tasks.
Our experiments indicate that CBCBP outperforms PN-potentials in both speed and accuracy. In the
future we intend to incorporate our approximate inference with consistency constraints into learning
frameworks, e.g., [15, 3].
References
[1] S. Bach, M. Broecheler, L. Getoor, and D. O'Leary. Scaling MPE inference for constrained continuous Markov random fields with consensus optimization. In Advances in Neural Information Processing Systems, pages 2654–2662, 2012.
[2] C. Callison-Burch, P. Koehn, C. Monz, and O. Zaidan. Findings of the 2011 Workshop on Statistical Machine Translation. In Proc. of WMT, 2011.
[3] L.-C. Chen*, A. G. Schwing*, A. L. Yuille, and R. Urtasun. Learning deep structured models. In Proc. ICML, 2015. * equal contribution.
[4] M. Everingham, L. Van Gool, C. Williams, J. Winn, and A. Zisserman. The PASCAL visual object classes challenge 2012 (VOC2012). http://www.pascal-network.org/challenges/VOC/voc2012/workshop/index.html, 2012.
[5] T. Hazan, J. Peng, and A. Shashua. Tightening fractional covering upper bounds on the partition function for high-order region graphs. In Proc. UAI, 2012.
[6] T. Heskes. On the uniqueness of loopy belief propagation fixed points. Neural Computation, 16(11):2379–2413, 2004.
[7] P. Koehn, H. Hoang, A. Birch, C. Callison-Burch, M. Federico, N. Bertoldi, B. Cowan, W. Shen, C. Moran, R. Zens, C. Dyer, O. Bojar, A. Constantin, and E. Herbst. Moses: Open source toolkit for statistical machine translation. In Proc. of ACL, 2007.
[8] P. Koehn, F. J. Och, and D. Marcu. Statistical phrase-based translation. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, Volume 1, pages 48–54. Association for Computational Linguistics, 2003.
[9] P. Kohli, P. H. Torr, et al. Robust higher order potentials for enforcing label consistency. International Journal of Computer Vision, 82(3):302–324, 2009.
[10] D. Koller and N. Friedman. Probabilistic Graphical Models: Principles and Techniques. MIT Press, 2009.
[11] L. Ladicky, C. Russell, P. Kohli, and P. H. Torr. Graph cut based inference with co-occurrence statistics. In Computer Vision – ECCV 2010, pages 239–253. Springer, 2010.
[12] A. F. Martins, M. A. Figueiredo, P. M. Aguiar, N. A. Smith, and E. P. Xing. An augmented Lagrangian approach to constrained MAP inference. 2011.
[13] A. F. T. Martins. The geometry of constrained structured prediction: applications to inference and learning of natural language syntax. PhD thesis, Columbia University, 2012.
[14] T. Meltzer, A. Globerson, and Y. Weiss. Convergent message passing algorithms: a unifying view. In UAI, 2009.
[15] O. Meshi, D. Sontag, T. Jaakkola, and A. Globerson. Learning efficiently with approximate inference via dual losses. In Proc. ICML, 2010.
[16] S. Nowozin and C. H. Lampert. Global connectivity potentials for random field models. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 818–825. IEEE, 2009.
[17] K. Papineni, S. Roukos, T. Ward, and W. Zhu. BLEU: a method for automatic evaluation of machine translation. In Proc. of ACL, 2002.
[18] A. G. Schwing, S. Fidler, M. Pollefeys, and R. Urtasun. Box in the box: Joint 3D layout and object reasoning from single images. In Proc. ICCV, 2013.
[19] A. G. Schwing, T. Hazan, M. Pollefeys, and R. Urtasun. Distributed message passing for large scale graphical models. In Proc. CVPR, 2011.
[20] A. G. Schwing, T. Hazan, M. Pollefeys, and R. Urtasun. Globally convergent dual MAP LP relaxation solvers using Fenchel-Young margins. In Proc. NIPS, 2012.
[21] J. Shotton, M. Johnson, and R. Cipolla. Semantic texton forests for image categorization and segmentation. In Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on, pages 1–8. IEEE, 2008.
[22] D. A. Smith and J. Eisner. Dependency parsing by belief propagation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 145–156. Association for Computational Linguistics, 2008.
[23] D. Tarlow, D. Batra, P. Kohli, and V. Kolmogorov. Dynamic tree block coordinate ascent. ICML, pages 113–120, 2011.
[24] M. J. Wainwright, T. S. Jaakkola, and A. S. Willsky. A new class of upper bounds on the log partition function. Trans. on Information Theory, 51(7):2313–2335, 2005.
[25] M. J. Wainwright and M. I. Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1–2):1–305, 2008.
5,769 | 622 | Information, prediction, and query by committee

Yoav Freund
Computer and Information Sciences
University of California, Santa Cruz
yoav@cse.ucsc.edu

Eli Shamir
Institute of Computer Science
Hebrew University, Jerusalem
shamir@cs.huji.ac.il

H. Sebastian Seung
AT&T Bell Laboratories
Murray Hill, New Jersey
seung@physics.att.com

Naftali Tishby
Institute of Computer Science and
Center for Neural Computation
Hebrew University, Jerusalem
tishby@cs.huji.ac.il
Abstract
We analyze the "query by committee" algorithm, a method for filtering informative queries from a random stream of inputs. We
show that if the two-member committee algorithm achieves information gain with positive lower bound, then the prediction error
decreases exponentially with the number of queries. We show that,
in particular, this exponential decrease holds for query learning of
thresholded smooth functions.
1 Introduction
For the most part, research on supervised learning has utilized a random input
paradigm, in which the learner is both trained and tested on examples drawn at
random from the same distribution. In contrast, in the query paradigm, the learner
is given the power to ask questions, rather than just passively accept examples.
What does the learner gain from this additional power? Can it attain the same
prediction performance with fewer examples?
Most work on query learning has been in the constructive paradigm, in which the
learner constructs inputs on which to query the teacher. For some classes of boolean
functions and finite automata that are not PAC learnable from random inputs, there
are algorithms that can successfully PAC learn using "membership queries" [Val84,
Ang88]. Query algorithms are also known for neural network learning[Bau91]. The
general relevance of these positive results is unclear, since each is specific to the
learning of a particular concept class. Moreover, as shown by Eisenberg and Rivest
in [ER90], constructed membership queries cannot be used to reduce the number of
examples required for PAC learning. That is because random examples provide the
learner with information not only about the correct mapping, but also about the
distribution of future test inputs. This information is lacking if the learner must
construct inputs.
In the statistical literature, some attempt has been made towards a more fundamental understanding of query learning, there called "sequential design of
experiments."¹ It has been suggested that the optimal experiment (query) is the
one with maximal Shannon information[Lin56, Fed72, Mac92]. Similar suggestions
have been made in the perceptron learning literature[KR90]. Although the use of an
entropic measure seems sensible, its relationship with prediction error has remained
unclear.
Understanding this relationship is a main goal of the present work, and enables us
to prove a positive result about the power of queries. Our work is derived within the
query filtering paradigm, rather than the constructive paradigm. In this paradigm,
proposed by [CAL90], the learner is given access to a stream of inputs drawn at
random from a distribution. The learner sees every input, but chooses whether or
not to query the teacher for the label. This paradigm is realistic in contexts where
it is cheap to get unlabeled examples, but expensive to label them. It avoids the
problems with the constructive paradigm described in [ER90] because it gives the
learner free access to the input distribution.
In [CAL90] there are several suggestions for query filters together with some empirical tests of their performance on simple problems. Seung et al.[SOS92] have
suggested a filter called "query by committee," and analytically calculated its performance for some perceptron-type learning problems. For these problems, they
found that the prediction error decreases exponentially fast in the number of queries.
In this work we present a more complete and general analysis of query by committee, and show that such an exponential decrease is guaranteed for a general class of
learning problems.
We work in a Bayesian model of concept learning [HKS91] in which the target concept f is chosen from a concept class C according to some prior distribution P. The concept class consists of boolean-valued functions defined on some input space X. An example is an input x ∈ X along with its label f = f(x). For any set of
examples, we define the version space to be the set of all hypotheses in C that are
consistent with the examples. As each example arrives, it eliminates inconsistent
hypotheses, and the probability of the version space (with respect to P) is reduced.
¹The paradigm of (non-sequential) experimental design is analogous to what might be called "batch query learning," in which all of the inputs are chosen by the learner before a single label is received from the teacher.

The instantaneous information gain (i.i.g.) is defined as the logarithm of the ratio of version space probabilities before and after receiving the example. In this work, we study a particular kind of learner, the Gibbs learner, which chooses a hypothesis
at random from the version space. In Bayesian terms, it chooses from the posterior
distribution on the concept class, which is the restriction of the prior distribution
to the version space.
If an unlabeled input x is provided, the expected i.i.g. of its label can be defined by
taking the expectation with respect to the probabilities of the unknown label. The
input x divides the version space into two parts, those hypotheses that label it as
a positive example, and those that label it negative. Let the probability ratios of these two parts to the whole be X and 1 − X. Then the expected i.i.g. is

H(X) = −X log X − (1 − X) log(1 − X).   (1)
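In numerical form (a sketch of ours, not from the paper), Eq. (1) is the binary entropy of the split fraction:

```python
import math

def expected_iig(X):
    # Eq. (1): expected information gain when the input splits the
    # version space into fractions X and 1 - X.
    if X in (0.0, 1.0):
        return 0.0
    return -X * math.log2(X) - (1 - X) * math.log2(1 - X)

print(expected_iig(0.5))  # 1.0 bit: a perfect bisection
print(expected_iig(0.9))  # ~0.47 bits for a lopsided split
```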
The goal of the learner is to minimize its prediction error, its probability of error
on an input drawn from the input distribution V. In the case of random input
learning, every input x is drawn independently from V. Since the expected i.i.g.
tends to zero (see [HKS91]), it seems that random input learning is inefficient. We
will analyze query construction and filtering algorithms that are designed to achieve
high information gain.
The rest of the paper is organized as follows. In section 2 we exhibit query construction algorithms for the high-low game. The bisection algorithm for high-low
illustrates that constructing queries with high information gain can improve prediction performance. But the failure of bisection for multi-dimensional high-low
exposes a deficiency of the query construction paradigm. In section 3 we define
the query filtering paradigm, and discuss the relation between information gain and
prediction error for queries filtered by a committee of Gibbs learners. In section
4 lower bounds for information gain are proved for the learning of some nontrivial
concept classes. Section 5 is a summary and discussion of open problems.
2 Query construction and the high-low game
In this section, we give examples of query construction algorithms for the high-low
game and its generalizations. In the high-low game, the concept class C consists of
functions of the form

fw(x) = {1 if w < x; 0 if w > x}   (2)

where 0 ≤ w, x ≤ 1. Thus both X and C are naturally mapped to the interval [0,1].
Both P, the prior distribution for the parameter w, and V, the input distribution for
x, are assumed to be uniform on [0,1]. Given any sequence of examples, the version
space is [xL, xR] where xL is the largest negative example and xR is the smallest positive example. The posterior distribution is uniform in the interval [xL, xR] and vanishes outside.
The prediction error of a Gibbs learner is Pr(fv(x) ≠ fw(x)) where x is chosen from V, and v and w from the posterior distribution. It is easy to show that Pr(fv(x) ≠ fw(x)) = (xR − xL)/3. Since the prediction error is proportional to the version space volume, always querying on the midpoint (xR + xL)/2 causes the prediction error after m queries to decrease like 2^−m. This is in contrast to the case of random input learning, for which the prediction error decreases like 1/m.
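A small Monte-Carlo sketch (ours, not from the paper) makes the contrast concrete: midpoint queries shrink the version space like 2^−m, while random inputs give roughly 1/m.

```python
import random

def high_low(m, bisect, rng):
    w = rng.random()               # hidden threshold
    x_lo, x_hi = 0.0, 1.0          # current version space
    for _ in range(m):
        x = (x_lo + x_hi) / 2 if bisect else rng.random()
        if x < w:                  # label 0: x is a negative example
            x_lo = max(x_lo, x)
        else:                      # label 1: x is a positive example
            x_hi = min(x_hi, x)
    return x_hi - x_lo             # prediction error is this length / 3

rng = random.Random(0)
for name, flag in [("bisection", True), ("random", False)]:
    mean = sum(high_low(10, flag, rng) for _ in range(2000)) / 2000
    print(name, mean)              # ~0.001 vs. ~0.17 after m = 10 examples
```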
The strategy of bisection is clearly maximally informative, since it achieves H(1/2) = 1 bit per query, and can be applied to the learning of any concept class. Naive intuition suggests that it should lead to rapidly decreasing prediction error, but this is not necessarily so. Generalizing the high-low game to d dimensions provides a simple counterexample. The target concepts are functions of the form

fw(i, x) = {1 if wi < x; 0 if wi > x}   (3)
The prior distribution of w is uniform on the concept class C = [0, 1]^d. The inputs are pairs (i, x), where i takes on the values 1, ..., d with equal probability, and x is uniformly distributed on [0, 1]. Since this is basically d concurrent high-low games (one for each component of w), the version space is a product of subintervals of [0, 1]. For d = 2, the concept class is the unit square, and the version space is a rectangle. The prediction error is proportional to the perimeter of the rectangle. A sequence of queries with i = 1 can bisect the rectangle along one dimension, yielding 1 bit per query, while the perimeter tends to a finite constant. Hence the prediction error tends to a finite constant, in spite of the maximal information gain.
3 The committee filter: information and prediction
The dilemma of the previous section was that constructing queries with high information gain does not necessarily lead to rapidly decreasing prediction error. This is
because the constructed query distribution may have nothing to do with the input
distribution V. This deficiency can be avoided in a different paradigm in which
the query distribution is created by filtering V. Suppose that the learner receives a stream of unlabeled inputs x1, x2, ... drawn independently from the distribution V. After seeing each input xi, the learner has the choice of whether or not to query the teacher for the correct label fi = f(xi).
In [SOS92] it was suggested to filter queries that cause disagreement in a committee of Gibbs learners. In this paper we concentrate on committees with two members. The algorithm is:

Query by a committee of two
Repeat the following until n queries have been accepted:
1. Draw an unlabeled input x ∈ X at random from V.
2. Select two hypotheses h1, h2 from the posterior distribution. In other words, pick two hypotheses that are consistent with the labeled examples seen so far.
3. If h1(x) ≠ h2(x), then query the teacher for the label of x, and add it to the training set.
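For the high-low game the posterior is uniform on [xL, xR], so the committee is trivial to simulate. The sketch below (our own) also counts how many unlabeled inputs must be drawn, which grows as the version space shrinks and disagreement becomes rare.

```python
import random

def qbc_high_low(n_queries, rng):
    w = rng.random()
    x_lo, x_hi = 0.0, 1.0
    draws = 0
    for _ in range(n_queries):
        while True:
            draws += 1
            x = rng.random()                 # unlabeled input from D
            h1 = rng.uniform(x_lo, x_hi)     # two Gibbs hypotheses
            h2 = rng.uniform(x_lo, x_hi)
            if (x < h1) != (x < h2):         # committee disagrees: query
                break
        if x < w:
            x_lo = max(x_lo, x)
        else:
            x_hi = min(x_hi, x)
    return x_hi - x_lo, draws

rng = random.Random(1)
err, draws = qbc_high_low(10, rng)
print(err, draws)   # small error; draws grow as accepted queries get rarer
```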
The committee filter tends to select examples that split the version space into two
parts of comparable size, because if one of the parts contains most of the version
space, then the probability that the two hypotheses will disagree is very small.
More precisely, if x cuts the version space into parts of size X and 1 - X, then the
probability of accepting x is 2X(1 − X). One can show that the i.i.g. of the queries
is lower bounded by that obtained from random inputs.
In this section, we assume something stronger: that the expected i.i.g. of the committee has positive lower bound. Conditions under which this assumption holds will
be discussed in the next section. The bound implies that the cumulative information
gain increases linearly with the number of queries n. But the version space resulting
from the queries alone must be larger than the version space that would result if
the learner knew all of the labels. Hence the cumulative information gain from the
queries is upper bounded by the cumulative information gain which would be obtained from the labels of all m inputs, which behaves like O(d log(m/d)) for a concept
class C with finite VC dimension d ([HKS91]). These O(n) and O(log m) behaviors
are consistent only if the gap between consecutive queries increases exponentially
fast. This argument is depicted in Figure 1.
[Figure 1 sketch: cumulative information gain versus the number of random examples, comparing the cumulative information of the queries with the expected gain from random examples; the gaps between examples accepted as queries grow along the x axis.]

Figure 1: Each tag on the x axis denotes a random example in a specific typical sequence. The symbol X under a tag denotes the fact that the example was chosen as a query.
Recall that an input is accepted if it provokes disagreement between the two Gibbs
learners that constitute the committee. Thus a large gap between consecutive
queries is equivalent to a small probability of disagreement. But in our Bayesian
framework the probability of disagreement between two Gibbs learners is equal to
the probability of disagreement between a Gibbs learner and the teacher, which is
the expected prediction error. Thus the prediction error is exponentially small as a
function of the number of queries. The exact statement of the result is given below,
a detailed proof of which will be published elsewhere.
Theorem 1 Suppose that a concept class C has VC-dimension d < ∞, and the expected information gained by the two member committee algorithm is bounded below by c > 0, independent of the query number and of the previous queries. Then the probability that one of the two committee members makes a mistake on a randomly chosen example with respect to a randomly chosen f ∈ C is bounded by

(3 + O(e^{−c1 n})) exp(−cn / (2(d + 1)))   (4)

for some constant c1 > 0, where n is the number of queries asked so far.
4 Lower bounds on the information gain
Theorem 1 is applicable to learning problems for which the committee achieves i.i.g.
with positive lower bound. A simple case of this is the d-dimensional high-low game
of section 2, for which the i.i.g. is 7/(12 ln 2) ≈ 0.84, independent of dimension. This
exact result is simple to derive because the high-low game is geometrically trivial:
all version spaces are similar to each other. In general, the shape of the version space
is more complex, and depends on the randomness of the examples. Nevertheless,
the expected i.i.g. can be lower bounded even for some learning problems with
nontrivial version space geometry.
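This value can be checked numerically with a short sketch (ours): for the high-low game the split fraction X of a random input is uniform on [0, 1], an input is accepted with probability 2X(1 − X), and the expected i.i.g. is the acceptance-weighted average of H(X).

```python
import math

def H(x):
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

n = 200_000
xs = [(i + 0.5) / n for i in range(n)]       # midpoint grid on (0, 1)
num = sum(2 * x * (1 - x) * H(x) for x in xs)
den = sum(2 * x * (1 - x) for x in xs)
print(num / den)                  # ~0.8417 bits per accepted query
print(7 / (12 * math.log(2)))     # the closed form, 0.8417...
```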
4.1 The information gain for convex version spaces
Define a class of functions fw by

fw(x, t) = {1 if w·x > t; 0 if w·x < t}   (5)

The vector w ∈ R^d is drawn at random from a prior distribution P, which is uniform over some convex body contained in the unit ball. The distribution of inputs (x, t) ∈ Bd × [−1, 1] is a product of any distribution over Bd (the unit ball centered at the origin) and the uniform distribution over [−1, +1]. Since each example defines a plane in the concept space, all version spaces for this problem are convex. We show that there is a uniform lower bound on the expected i.i.g. for any convex version space when a two member committee filters inputs drawn from V.
In the next paragraphs we sketch our proof, the full details of which shall appear
elsewhere.
In fact, we prove a stronger statement, a bound on the expected i.i.g. for any fixed x̂. Fix x̂ and define X(t) as the fraction of the version space volume for which w·x̂ < t. Since the probability of filtering a query at t is proportional to 2X(t)[1 − X(t)], the expected i.i.g. is given by

I[X(t)] = ∫_{−1}^{1} 2X(t)[1 − X(t)] H(X(t)) dt / ∫_{−1}^{1} 2X(t)[1 − X(t)] dt   (6)
In the following, it is more convenient to define the expected i.i.g. as a functional of r(t) = (dX/dt)^{1/(d−1)}, which is the radius function of the body of revolution with equivalent cross sectional area dX/dt. Using the Brunn-Minkowski inequality, it can be shown that any convex body has a concave radius function r(t).
We have found a set of four transformations of r(t) which decrease I[r]. The only concave function that is a fixed point of these transformations is (up to volume preserving rescaling transformations):

r*(t) = (d/2)^{1/(d−1)} (1 − |t|).

This corresponds to the body constructed by placing two cones base to base with their axes pointing along x̂. We can calculate I[r*] explicitly for each dimension d. As the dimension of the space increases to infinity this value converges from above to a strictly positive value which is 1/9 + 7/(18 ln 2) ≈ 0.672 bits, which is surprisingly close to the upper bound of 1 bit.
4.2 The information gain for thresholded continuous functions
Consider a concept class consisting of functions of the form

fw(x) = {1 if F(w, x) ≥ 0; 0 if F(w, x) < 0}   (7)

where x ∈ R^l, w ∈ R^d, and F is a smooth function of both x and w. Random input learning of this type of concept class has been studied within the annealed approximation by [AFS92]. We assume that both V and P are described by density functions that are smooth and nonvanishing almost everywhere. Let the target concept be denoted by fw0(x). We now argue that in the small version space limit (reached in the limit of a large number of examples), the expected i.i.g. for query learning of this concept class has the same lower bound that was derived in section
4.1.
This is because a linear expansion of F becomes a good approximation in the version space,

F(w, x) = F(w0, x) + (w − w0) · ∇w F(w0, x).   (8)

Consequently, the version space is a convex body containing w0, each boundary of which is a hyperplane perpendicular to ∇w F(w0, x) for some x in the training set. Because the prior density P is smooth and nonvanishing, the posterior becomes uniform on the version space.

From Eq. (8) it follows that a small version space is only cut by hyperplanes corresponding to inputs x for which F(w0, x) is small. Such inputs can be parametrized by using coordinates on the decision boundary (the manifold in x space determined by F(w0, x) = 0), plus an additional coordinate for the direction normal to the decision boundary. Varying the normal coordinate of x changes the distance of the corresponding hyperplane from w0, but does not change its direction (to lowest order). Hence each normal average is governed by the lower bound of 0.672 bits that was derived in section 4.1 for planar cuts along a fixed axis of a convex version space. The expected i.i.g. is obtained by integrating the normal average over the
rest of the coordinates, and therefore is governed by the same lower bound.
5 Summary and open questions
In this work we have shown that the number of examples required for query learning behaves like the logarithm of the number required for random input learning.
This result on the power of query filtering applies generally to concept classes for
which the committee filter achieves information gain with positive lower bound,
and in particular to concept classes consisting of thresholded smooth functions.
A wide variety of learning architectures in common use fall in this group, including radial basis function networks and layered feedforward neural networks with
smooth transfer functions. Our main unrealistic assumption is that the learned rule
is assumed to be realizable and noiseless. Understanding how to filter queries for
learning unrealizable or noisy concepts remains an important open problem.
Acknowledgments
Part of this research was done at the Hebrew University of Jerusalem. Freund,
Shamir and Tishby would like to thank the US-Israel Binational Science Foundation
(BSF) Grant no. 90-00189/2 for support of their work. We would also like to thank
Yossi Azar and Manfred Opper for helpful discussions regarding this work.
References
[AFS92] S. Amari, N. Fujita, and S. Shinomoto. Four types of learning curves.
Neural Comput., 4:605-618, 1992.
[Ang88] D. Angluin. Queries and concept learning. Machine Learning, 2:319-342,
1988.
[Bau91] E. Baum. Neural net algorithms that learn in polynomial time from examples and queries. IEEE Trans. Neural Networks, 2:5-19, 1991.
[CAL90] D. Cohn, L. Atlas, and R. Ladner. Training connectionist networks with
queries and selective sampling. Advances in Neural Information Processing
Systems, 2:566-573, 1990.
[ER90] B. Eisenberg and R. Rivest. On the sample complexity of PAC-learning
using random and chosen examples. In M. Fulk and J. Case, editors, Proceedings of the Third Annual ACM Workshop on Computational Learning
Theory, pages 154-162, San Mateo, Ca, 1990. Kaufmann.
[Fed72] V. V. Fedorov. Theory of Optimal Experiments. Academic Press, New
York, 1972.
[HKS91] D. Haussler, M. Kearns, and R. Schapire. Bounds on the sample complexity of Bayesian learning using information theory and the VC dimension.
In M. K. Warmuth and L. G. Valiant, editors, Proceedings of the Fourth
Annual Workshop on Computational Learning Theory, pages 61-74, San
Mateo, CA, 1991. Kaufmann.
[KR90] W. Kinzel and P. Rujan. Improving a network generalization ability by
selecting examples. Europhys. Lett., 13:473-477, 1990.
[Lin56] D. V. Lindley. On a measure of the information provided by an experiment.
Ann. Math. Statist., 27:986-1005, 1956.
[Mac92] D. J. C. MacKay. Bayesian methods for adaptive models. PhD thesis,
California Institute of Technology, Pasadena, 1992.
[SOS92] H. S. Seung, M. Opper, and H. Sompolinsky. Query by committee. In
Proceedings of the fifth annual A CM workshop on computational learning
theory, pages 287-294, New York, 1992. ACM.
[Val84] L. G. Valiant. A theory of the learnable. Comm. ACM, 27:1134-1142,
1984.
5,770 | 6,220 | Multivariate tests of association based on univariate tests
Ruth Heller
Department of Statistics and Operations Research
Tel-Aviv University
Tel-Aviv, Israel 6997801
[email protected]
Yair Heller
[email protected]
Abstract
For testing two vector random variables for independence, we propose testing
whether the distance of one vector from an arbitrary center point is independent
from the distance of the other vector from another arbitrary center point by a
univariate test. We prove that under minimal assumptions, it is enough to have
a consistent univariate independence test on the distances, to guarantee that the
power to detect dependence between the random vectors increases to one with
sample size. If the univariate test is distribution-free, the multivariate test will
also be distribution-free. If we consider multiple center points and aggregate
the center-specific univariate tests, the power may be further improved, and the
resulting multivariate test may have a distribution-free critical value for specific
aggregation methods (if the univariate test is distribution free). We show that certain
multivariate tests recently proposed in the literature can be viewed as instances
of this general approach. Moreover, we show in experiments that novel tests
constructed using our approach can have better power and computational time than
competing approaches.
1 Introduction
Let X ∈ R^p and Y ∈ R^q be random vectors, where p and q are positive integers. The null hypothesis of independence is H0 : FXY = FX FY, where the joint distribution of (X, Y) is denoted by FXY, and the distributions of X and Y, respectively, by FX and FY. If X is a categorical variable with K categories, then the null hypothesis of independence is the null hypothesis in the K-sample problem, H0 : F1 = . . . = FK, where Fk, k ∈ {1, . . . , K}, is the distribution of Y in category k.
The problem of testing for independence of random vectors, as well as the K-sample problem on a
multivariate Y, against the general alternative H1 : FXY ≠ FX FY, has received increased attention
in recent years. The most common approach is based on pairwise distances or similarity measures.
See (26), (6), (24), and (12) for consistent tests of independence, and (10), (25), (1), (22), (5), and (8)
for recent K-sample tests. Earlier tests based on nearest neighbours include (23) and (13). For the
K-sample problem, the practice of comparing multivariate distributions based on pairwise distances is
justified by the fact that, under mild conditions, the distributions differ if and only if the distributions
of within and between pairwise distances differ (19). Other innovative approaches have also been
considered in recent years. In (4) and (28), the authors suggest to reduce the multivariate data to
a lower dimensional sub-space by (random) projections. Recently, in (3) another approach was
introduced for the two sample problem, which is based on distances between analytic functions
representing each of the distributions. Their novel tests are almost surely consistent when randomly
selecting locations or frequencies and are fast to compute.
We suggest the following approach for testing for independence: first compute the distances from a
fixed center point, then apply any univariate independence test on the distances. We show that this
approach can result in novel powerful multivariate tests, that are attractive due to their theoretical
guarantees and computational complexity. Specifically, in Section 2 we show that if H0 is false,
then applying a univariate consistent test on distances from a single center point will result in a
multivariate consistent test (except for a measure zero set of center points), where a consistent test is
a test with power (i.e., probability of rejecting H0 when H0 is false) increasing to one as the sample
size increases when H0 is false. Moreover, the computational time is that of the univariate test, which
means that it can be very fast. In particular, a desirable requirement is that the null distribution
of the test statistic does not depend on the marginal distributions of X and Y , i.e., that the test is
distribution-free. Powerful univariate consistent distribution-free tests exist (see (11) for novel tests
and a review), so if one of these distribution-free univariate test is applied on the distances, the
resulting multivariate test is distribution-free.
In Section 3 we show that considering the distances from M > 1 points and aggregating the resulting
statistics can also result in consistent tests, which may be more powerful than tests that consider a
single center point. Both distribution-free and permutation-based tests can be generated, depending
on the choice of aggregation method and univariate test.
In Section 4 we draw the connection between these results and some known tests mentioned above.
The tests of (10) and of (12) can be viewed as instances of this approach, where the fixed center
point is a sample point, and all sample points are considered each in turn as a fixed center point,
for a particular univariate test. In Section 5 we demonstrate in simulations that novel tests based on
our approach can have both a power advantage and a great computational advantage over existing
multivariate tests. In Section 6 we discuss further extensions.
2 From multivariate to univariate
We use the following result by (21). Let Bd(x, r) = {y ∈ R^d : ‖x − y‖ ≤ r} be a ball centered at x with radius r. A complex Radon measure μ, defined formally in Supplementary Material (SM) § D, on R^d is said to be of at most exponential-quadratic growth if there exist positive constants A and α such that |μ|(Bd(0, r)) ≤ A e^{αr²}.

Proposition 2.1 (Rawat and Sitaram (21)). Let Γ ⊆ R^d be such that the only real analytic function (defined on an open set containing Γ) that vanishes on Γ is the zero function. Let C = {Bd(x, r) : x ∈ Γ, r > 0}. Then for any complex Radon measure μ on R^d of at most exponential-quadratic growth, if μ(C) = 0 for all C ∈ C, then it necessarily follows that μ = 0.
For the two-sample problem, let Y ∈ R^q be a random variable with cumulative distribution F1 in category X = 1, and F2 in category X = 2. For z ∈ R^q, let F'iz be the cumulative distribution function of ‖Y − z‖ when Y has cumulative distribution Fi, i ∈ {1, 2}. We show that if the distribution of Y differs across categories, then so does the distribution of the distance of Y from almost every point z. Therefore, any univariate consistent two-sample test on the distances from z results in a consistent test of the equality of the multivariate distributions F1 and F2, for almost every z. It is straightforward to generalize these results to K > 2 categories.
Proofs of all theorems are in SM § A.
Theorem 2.1. If H0 : F1 = F2 is false, then for every z ∈ R^q, apart from at most a set of Lebesgue measure 0, there exists an r > 0 such that F'1z(r) ≠ F'2z(r).

Corollary 2.1. For every z ∈ R^q, apart from at most a set of Lebesgue measure 0, a consistent two-sample univariate test of the null hypothesis H'0 : F'1z = F'2z will result in a multivariate consistent test of the null hypothesis H0 : F1 = F2.
For the multivariate independence test, let X ∈ R^p and Y ∈ R^q be two random vectors with marginal distributions FX and FY, respectively, and with joint distribution FXY. For z = (zx, zy), zx ∈ R^p, zy ∈ R^q, let F'XYz be the joint distribution of (‖X − zx‖, ‖Y − zy‖). Let F'Xz and F'Yz be the marginal distributions of ‖X − zx‖ and ‖Y − zy‖, respectively.

Theorem 2.2. If H0 : FXY = FX FY is false, then for every zx ∈ R^p, zy ∈ R^q, apart from at most a set of Lebesgue measure 0, there exist rx > 0, ry > 0, such that F'XYz(rx, ry) ≠ F'Xz(rx) F'Yz(ry).

Corollary 2.2. For every z ∈ R^{p+q}, apart from at most a set of Lebesgue measure 0, a consistent univariate test of independence of the null hypothesis H'0 : F'XYz = F'Xz F'Yz will result in a multivariate consistent test of the null hypothesis H0 : FXY = FX FY.
We have N independent copies (xi, yi) (i = 1, . . . , N) from the joint distribution FXY. The above results motivate the following two-step procedure for the multivariate tests. For the K-sample test, xi ∈ {1, . . . , K} determines the category and yi ∈ R^q is the observation in category xi, so the two-step procedure is to first choose z ∈ R^q and then to apply a univariate K-sample consistent test on (x1, ‖y1 − z‖), . . . , (xN, ‖yN − z‖). Examples of such univariate tests include the classic Kolmogorov-Smirnov and Cramer-von Mises tests. For the independence test, the two-step procedure is to first choose zx ∈ R^p and zy ∈ R^q, and then to apply a univariate consistent independence test on (‖x1 − zx‖, ‖y1 − zy‖), . . . , (‖xN − zx‖, ‖yN − zy‖). An example of such a univariate test is the classic test of Hoeffding (14). Note that the consistency of a univariate test may be satisfied only under some assumptions on the distribution of the distances of the multivariate vectors. For example, the consistency of (14) follows if the densities of ‖X − zx‖ and ‖Y − zy‖ are continuous. See (11) for additional distribution-free univariate K-sample and independence tests.
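The following sketch (ours, not the authors' code) runs the K = 2 version of this procedure on the mixture example discussed below, using SciPy's two-sample Kolmogorov-Smirnov test on distances from the informative center point (0, 100); sample sizes and the random seed are arbitrary choices.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
n = 1000

def sample(stds, size):
    # half the points from N2(0, diag(stds**2)), half from N2(0, diag(100, 100))
    narrow = rng.normal(0.0, stds, (size, 2))
    wide = rng.normal(0.0, 10.0, (size, 2))
    pick = rng.random((size, 1)) < 0.5
    return np.where(pick, narrow, wide)

y1 = sample(np.array([1.0, 3.0]), n)   # F1: diag(1, 9) narrow component
y2 = sample(np.array([3.0, 1.0]), n)   # F2: diag(9, 1) narrow component

z = np.array([0.0, 100.0])             # informative center on an axis
stat, p = ks_2samp(np.linalg.norm(y1 - z, axis=1),
                   np.linalg.norm(y2 - z, axis=1))
print(stat, p)    # the distance distributions differ: expect a small p-value
```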
A great advantage of this two-step procedure is the fact that it has the same computational complexity
as the univariate test. For example, if one chooses to use Hoeffding's univariate independence test (14), then the total complexity is only O(N log N), which is the cost of computing the test statistic. The p-value can be extracted from a look-up table since Hoeffding's test is distribution-free. In comparison, the computational complexity of the multivariate permutation tests of (26) and (12) is O(BN²) and O(BN² log N), respectively, where B is the number of permutations. For
many univariate tests the asymptotic null distribution is known, thus it can be used to compute the
significance efficiently without resorting to permutations, which are typically required for assessing
the multivariate significance.
Another advantage of the two-step procedure is the fact that the test statistic may be estimating
an easily interpretable population value. The univariate test statistics often converge to easily
interpretable population values, which are often between 0 and 1. These values carry over to provide
meaning to the new multivariate statistics, see examples in equations (1) and (2).
In practice, the choice of the center value from which the distances are measured can have a significant impact on power, as demonstrated in the following example. Let Σ_{i=1}^{k} pi N2(μi, diag(σ²i1, σ²i2)) denote the mixture distribution of k bivariate normals, with mean μi and a diagonal covariance matrix with diagonal entries σ²i1 and σ²i2, denoted by diag(σ²i1, σ²i2), i = 1, . . . , k. Consider the following bivariate two sample problem, which is depicted in Figure 1, where F1 = (1/2) N2(0, diag(1, 9)) + (1/2) N2(0, diag(100, 100)) and F2 = (1/2) N2(0, diag(9, 1)) + (1/2) N2(0, diag(100, 100)). Clearly F'1z has the same distribution as F'2z if z ∈ {(y1, y2) : y1 = y2 or y1 = −y2}, see Figure 1(c). In agreement with Theorem 2.1, the measure of these non-informative center points is zero. On the other hand, if we use as a center point a point on one of the axes, the distribution of the distances will be very different. See in particular the distribution of distances from the point (0, 100) in Figure 1(b) and the power analysis in Table 2.
3 Pooling univariate tests together
We need not rely on a single z ∈ R^{p+q} (or a single z ∈ R^q for the K-sample problem). If we apply a consistent univariate test using many points zi for i = 1, . . . , M as our center points, where the test is applied on the distances of the N sample points from the center point, we obtain M test statistics and corresponding p-values, p1, . . . , pM.

We can use the p-values or the test statistics of the univariate tests to design consistent multivariate tests. We suggest three useful approaches. The first approach is to combine the p-values, using a combining function f : [0, 1]^M → [0, 1]. Common combining functions include f(p1, . . . , pM) = min_{i=1,...,M} pi, and f(p1, . . . , pM) = −2 Σ_{i=1}^{M} log pi.
The second approach is to combine the univariate test statistics, by a combining function such as
the average, maximum, or minimum statistic. These aggregation methods can result in test statistics
which converge to meaningful population values, see equations (1) and (2) below for multivariate
tests based on the univariate Kolmogorov-Smirnov two sample test (18).

Figure 1: (a) Realizations from two bivariate normal distributions, with a sample size of 1000 from each group: $F_1 = \frac{1}{2} N_2(0, \mathrm{diag}(1, 9)) + \frac{1}{2} N_2(0, \mathrm{diag}(100, 100))$ (black points), and $F_2 = \frac{1}{2} N_2(0, \mathrm{diag}(9, 1)) + \frac{1}{2} N_2(0, \mathrm{diag}(100, 100))$ (red points); (b) the empirical density of the distance $\|Y - (0, 100)\|$ from the point (0,100) in each group; (c) the empirical density of the distance $\|Y - (100, 100)\|$ from the point (100,100) in each group.

We note that if the univariate
tests are distribution-free then taking the maximum (minimum) p-value is equivalent to taking the
minimum (maximum) test statistic (when the test rejects for large values of the test statistic). The
significance of the combined p-value or the combined test statistic can be computed by a permutation
test.
A drawback of the two approaches above is that the distribution-free property of the univariate test
does not carry over to the multivariate test. In our third approach, we consider the set of M p-values as
coming from a family of M null hypotheses, and then apply a valid test of the global null hypothesis
that all M null hypotheses are true. Let $p_{(1)} \le \ldots \le p_{(M)}$ be the sorted p-values. The simplest valid
test for any type of dependence is the Bonferroni test, which rejects the global null if $M p_{(1)} \le \alpha$.
Another valid test is the test of Hommel (16), which rejects if $\min_{j=1,\ldots,M} \left\{ M \left( \sum_{l=1}^{M} 1/l \right) p_{(j)} / j \right\} \le \alpha$.
(This test statistic was suggested independently in a multiple testing procedure for false discovery
rate control under general dependence in (2).) The third approach is computationally much more
efficient than the first two approaches, since no permutation test is required after the computation of
the univariate p-values, but it may be less powerful. Clearly, if the univariate test is distribution-free,
the resulting multivariate test has a distribution-free critical value.
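A minimal sketch of this third pooling approach, assuming the univariate test returns a valid p-value for each center point (the function name is ours, and SciPy's Kolmogorov-Smirnov test again stands in for any consistent univariate test):

```python
import numpy as np
from scipy.stats import ks_2samp

def pooled_two_sample_test(x, y, centers, alpha=0.1):
    """Sketch of the third pooling approach: one univariate test per center
    point, then a global-null test on the M p-values. No permutations are
    needed, and the critical value is distribution-free if the test is."""
    pvals = np.array([
        ks_2samp(np.linalg.norm(x - z, axis=1),
                 np.linalg.norm(y - z, axis=1)).pvalue
        for z in centers
    ])
    m = len(pvals)
    p_sorted = np.sort(pvals)
    reject_bonferroni = m * p_sorted[0] <= alpha
    # Hommel: reject if min_j { M * (sum_l 1/l) * p_(j) / j } <= alpha
    harmonic = np.sum(1.0 / np.arange(1, m + 1))
    hommel_stat = np.min(m * harmonic * p_sorted / np.arange(1, m + 1))
    reject_hommel = hommel_stat <= alpha
    return reject_bonferroni, reject_hommel
```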
As an example we prove that when using the Kolmogorov-Smirnov two sample test as the univariate
test, all the pooling methods above result in consistent multivariate two-sample tests. Let
$KS(z) = \sup_{d \in \mathbb{R}} |F'_{1z}(d) - F'_{2z}(d)|$ be the population value of the univariate Kolmogorov-Smirnov two sample
test statistic comparing the distribution of the distances. Let N be the total number of independent
observations. We assume for simplicity an equal number of observations from F1 and F2.
Theorem 3.1. Let $z_1, \ldots, z_M$ be a sample of center points from an absolutely continuous distribution
with probability measure $\mu$, whose support S has a positive Lebesgue measure in $\mathbb{R}^q$. Let $KS_N(z_i)$ be
the empirical value of $KS(z_i)$ with corresponding p-value $p_i$, $i = 1, \ldots, M$. Let $p_{(1)} \le \ldots \le p_{(M)}$
be the sorted p-values. Assume that the distribution functions F1 and F2 are continuous. For
$M = o(e^N)$, if $H_0: F_1 = F_2$ is false, then $\mu$-almost surely, the multivariate test will be consistent
for the following level $\alpha$ tests:
1. the permutation test using the test statistics $S_1 = \max_{i=1,\ldots,M} \{KS_N(z_i)\}$ or $S_2 = p_{(1)}$;
2. the test based on Bonferroni, which rejects $H_0$ if $M p_{(1)} \le \alpha$;
3. for $M \log M = o(e^N)$, the test based on Hommel's global null p-value, which rejects $H_0$ if
$\min_{j=1,\ldots,M} \left\{ M \left( \sum_{l=1}^{M} 1/l \right) p_{(j)} / j \right\} \le \alpha$;
4. the permutation tests using the statistics $T_1 = \sum_{i=1}^{M} KS_N(z_i)$ or $T_2 = -2 \sum_{i=1}^{M} \log p_i$.
Arguably, the most natural choice of center points is the sample points themselves. Interestingly, if
the univariate test statistic is a U-statistic (15) of order m (defined formally in the SM),
then the resulting multivariate test statistic is a U-statistic of order m + 1, if each sample point acts as
a center point and the univariate test statistics are averaged, as stated in the following lemma (see
SM § A for the proof).
Lemma 3.1. For univariate random variables (U, V), let $T_{N-1}((u_k, v_k), k = 1, \ldots, N-1)$ be
a univariate test statistic based on a random sample of size N − 1 from the joint distribution
of (U, V). If $T_{N-1}$ is a U-statistic of order m, then
$$S_N = \frac{1}{N}\left[ T\{(\|x_k - x_1\|, \|y_k - y_1\|), k = 2, \ldots, N\} + \ldots + T\{(\|x_k - x_N\|, \|y_k - y_N\|), k = 1, \ldots, N-1\} \right]$$
is a U-statistic of order m + 1.
The test statistics $S_1$ and $T_1/M$ converge to meaningful population quantities,
$$\lim_{N,M \to \infty} S_1 = \lim_{M \to \infty} \max_{z \in \{z_1, \ldots, z_M\}} KS(z) = \sup_{z \in S} KS(z), \quad (1)$$
$$\lim_{N,M \to \infty} T_1/M = \lim_{M \to \infty} \sum_{i=1}^{M} KS(z_i)/M = E\{KS(Z)\}, \quad (2)$$
where the expectation is over the distribution of the center point Z.
4 Connection to existing methods
We are aware of two multivariate test statistics of the above-mentioned form: aggregation of the
univariate test statistics on the distances from center points. The tests are the two-sample test of (10)
and the independence test of (12). Both these tests use the second pooling method mentioned above,
summing up the univariate test statistics. Furthermore, both these tests use the N sample points
as the center points (the z's) and perform a univariate test on the remaining N − 1 points. Indeed,
(10) recognized that their test can be viewed as summing up univariate Cramér–von Mises tests on
the distances from each sample point. We shall show that the test statistic of (12) can be viewed as
aggregation by summation of the univariate weighted Hoeffding independence test suggested in (27).
In (12) a permutation test was introduced, using the test statistic $\sum_{i=1}^{N}\sum_{j=1, j \ne i}^{N} S(i, j)$, where
S(i, j) is the Pearson test score for the 2×2 contingency table for the random variables $I(\|X - x_i\| \le \|x_j - x_i\|)$ and $I(\|Y - y_i\| \le \|y_j - y_i\|)$, where $I(\cdot)$ is the indicator function. Since $\|X - x_i\|$
and $\|Y - y_i\|$ are univariate random variables, S(i, j) can also be viewed as the test statistic for
the independence test between $\|X - x_i\|$ and $\|Y - y_i\|$, based on the 2×2 contingency table
induced by the 2×2 partition of $\mathbb{R}^2$ about the point $(\|x_j - x_i\|, \|y_j - y_i\|)$ using the N − 2
sample points $(\|x_k - x_i\|, \|y_k - y_i\|)$, $k = 1, \ldots, N$, $k \ne i$, $k \ne j$. The statistic that sums the
Pearson test statistics over all 2×2 partitions of $\mathbb{R}^2$ based on the observations results in a consistent
independence test for univariate random variables (27). The test statistic of (27) on the sample points
$(\|x_k - x_i\|, \|y_k - y_i\|)$, $k = 1, \ldots, N$, $k \ne i$, is therefore $\sum_{j=1, j \ne i}^{N} S(i, j)$. The multivariate test
statistic of (12) aggregates by summation the univariate test statistics of (27), where the ith univariate
test statistic is based on the N − 1 distances of $x_k$ from $x_i$, and the N − 1 distances of $y_k$ from $y_i$,
for $k = 1, \ldots, N$, $k \ne i$.
Of course, not all known consistent multivariate tests belong to the framework defined above. As
an interesting example we discuss the energy test of (25) and (1) for the two-sample problem.
Without loss of generality, let $y_1, \ldots, y_{N_1}$ be the observations from F1, and $y_{N_1+1}, \ldots, y_N$ be the
observations from F2, $N_2 = N - N_1$. The test statistic E is equal to
$$E = \frac{N_1 N_2}{N}\left( \frac{2}{N_1 N_2} \sum_{l=1}^{N_1} \sum_{m=N_1+1}^{N} \|y_l - y_m\| - \frac{1}{N_1^2} \sum_{l=1}^{N_1} \sum_{m=1}^{N_1} \|y_l - y_m\| - \frac{1}{N_2^2} \sum_{l=N_1+1}^{N} \sum_{m=N_1+1}^{N} \|y_l - y_m\| \right),$$
where $\|\cdot\|$ is the Euclidean norm. It is easy to see that $E = \sum_{i=1}^{N} S_i$, where the univariate score is
$$S_i = \left\{ \frac{1}{N_1} \sum_{m=1}^{N_1} \|y_i - y_m\| - \frac{1}{N_2} \sum_{m=N_1+1}^{N} \|y_i - y_m\| \right\} w(i), \quad (3)$$
and $w(i) = -N_2/N$ if $i \le N_1$ and $w(i) = N_1/N$ if $i > N_1$, for $i \in \{1, \ldots, N\}$. The statistic $S_i$ is not
an omnibus consistent test statistic, since a test based on $S_i$ will have no power to detect differences
in distributions with the same expected distance from $y_i$ across groups. However, the energy test is
omnibus consistent.
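The decomposition $E = \sum_i S_i$ is easy to verify numerically; below is a short sketch of ours (the function name is hypothetical), where y is an N × q array whose first $N_1$ rows come from F1:

```python
import numpy as np

def energy_scores(y, n1):
    """Sketch verifying the decomposition E = sum_i S_i of the energy
    statistic, with S_i as in equation (3). y is an (N, q) array whose
    first n1 rows are the sample from F1 and the rest from F2."""
    n = len(y)
    n2 = n - n1
    dist = np.linalg.norm(y[:, None, :] - y[None, :, :], axis=2)  # N x N distances
    w = np.where(np.arange(n) < n1, -n2 / n, n1 / n)              # the weights w(i)
    s = (dist[:, :n1].mean(axis=1) - dist[:, n1:].mean(axis=1)) * w
    return s  # the energy statistic E equals s.sum()
```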
5 Experiments
In order to assess the effect of using our novel approach, we carry out experiments. We have three
specific aims: (1) to compare the power of using a single center point versus multiple center points;
(2) to assess the effect of different univariate tests on the power; and (3) to see how the resulting tests
fare against other multivariate tests. For simplicity, we address the two-sample problem, and we do
not consider the more computationally intensive pooling approaches one and two, but rather consider
only the third approach that results in a distribution-free critical value for the multivariate test.
Simulation 1: distributions of dimension ≥ 2. We examined the distributions depicted in Figure 2.
Scenario (a) was chosen to examine the classical setting of discovering differences in multivariate
normal distributions. The other scenarios were chosen to discover differences in the distributions
when one or both distributions have clusters. These are similar to the settings considered in (9). In
addition, we examined the following scenario from (25) in five dimensions: F1 is the multivariate
standard normal distribution, and F2 = t(5)^(5) is the multivariate t distribution, where each of the
five independent coordinates has the univariate t distribution with five degrees of freedom.
Regarding the choice of center points, we examine as a single center point a sample point selected
at random or the center of mass (CM), and as multiple center points all sample points pooled by
the third approach (using either Bonferroni's test or Hommel's test). Regarding the univariate tests,
we examine: the test of Kolmogorov-Smirnov (18), referred to as KS; the test of the Anderson and
Darling family, constructed by (20) for the univariate two-sample problem, referred to as AD; and the
generalized test of (11), which aggregates over all partition sizes using the minimum p-value statistic,
referred to as minP (see SM § C for a detailed description). We compare our tests to Hotelling's
T², the classical generalization of Student's t statistic for multivariate normal data (17), referred
to as Hotelling; to the energy test of (25) and (1), referred to as Edist; and to the maximum mean
discrepancy test of (8), referred to as MMD.
Figure 2: Realizations from the three non-null bivariate settings considered, with a sample size
of 100 from each group: (a) $F_1 = N_2\{(0, 0), \mathrm{diag}(1, 1)\}$ and $F_2 = N_2\{(0, 0.05), \mathrm{diag}(0.9, 0.9)\}$;
(b) $F_1 = N_2\{(0, 0), \mathrm{diag}(1, 1)\}$ and $F_2 = \sum_{i=1}^{4} \frac{1}{4} N_2\{\mu_i, \mathrm{diag}(0.25, 0.25)\}$, where $\mu_1 = (1, 1)$,
$\mu_2 = (-1, 1)$, $\mu_3 = (1, -1)$, $\mu_4 = (-1, -1)$; (c) $F_1 = \sum_{i=1}^{9} \frac{1}{9} N_2\{\mu_i, \mathrm{diag}(1, 1)\}$ and
$F_2 = \sum_{i=1}^{9} \frac{1}{9} N_2\{\mu_i + (1, 1), \mathrm{diag}(0.25, 0.25)\}$ are both mixtures of nine bivariate normals with
equal probability of being sampled, but the centers of the bivariate normals of F1 are on the grid
points (10, 20, 30) × (10, 20, 30) and have covariance diag(1, 1), and the centers of the bivariate
normals of F2 are on the grid points (11, 21, 31) × (11, 21, 31) and have covariance diag(0.25, 0.25).
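For concreteness, scenario (b) can be sampled as follows; this is our own sketch (the function name and seeding are illustrative choices):

```python
import numpy as np

def sample_scenario_b(n, seed=0):
    """Sketch of the data generation for scenario (b) of Figure 2: F1 is the
    standard bivariate normal; F2 is an equal-weight mixture of four bivariate
    normals centered at (+-1, +-1) with covariance diag(0.25, 0.25)."""
    rng = np.random.default_rng(seed)
    f1 = rng.standard_normal((n, 2))
    centers = np.array([[1.0, 1.0], [-1.0, 1.0], [1.0, -1.0], [-1.0, -1.0]])
    f2 = centers[rng.integers(0, 4, size=n)] + 0.5 * rng.standard_normal((n, 2))
    return f1, f2  # standard deviation 0.5 gives variance 0.25 per coordinate
```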
Table 1 shows the actual significance level (column 3) and power (columns 4–7) for the different
multivariate tests considered, at the α = 0.1 significance level. We see that the choice of center point
matters: comparing rows 4–6 to rows 7–9 shows that, depending on the data generation, there can be
more or less power for the test that selects as the center point a sample point at random, versus the
center of mass, depending on whether the distances from the center of mass are more informative
than the distances from a random point. Comparing these rows with rows 10–15 shows that in most
settings there was benefit in considering all sample points as center points versus only a single center
point, even at the price of paying for multiplicity of the different center points. This was true despite
Table 1: The fraction of rejections at the 0.1 significance level for the null case (column 3), the
three scenarios depicted in Figure 2 (columns 4–6), and the additional scenario of higher dimension
(column 7). The sample size in each group was 100. Rows 4–6 use the center of mass (CM) as a
single center point; rows 7–9 use a random sample point as the single center point; rows 10–15 use
all sample points as center points. The adjustment for the multiple center points is by Bonferroni in
rows 10–12, and by Hommel's test in rows 13–15. Based on 500 repetitions for columns 4–7, and on
1000 repetitions for the true null setting in column 3.

| Row | Test | Null: F1 = F2 = N2{(0,0), diag(1,1)} | Fig. 2 (a) | Fig. 2 (b) | Fig. 2 (c) | N5{0, diag(1,1,1,1,1)} vs t(5)^(5) |
|-----|------|--------------------------------------|------------|------------|------------|------------------------------------|
| 1  | Hotelling                | 0.097 | 0.952 | 0.064 | 0.246 | 0.080 |
| 2  | Edist                    | 0.090 | 0.958 | 0.826 | 0.298 | 0.438 |
| 3  | MMD                      | 0.114 | 0.908 | 0.926 | 0.190 | 0.682 |
| 4  | single Z - CM - minP     | 0.095 | 0.308 | 0.990 | 0.634 | 0.974 |
| 5  | single Z - CM - KS       | 0.087 | 0.262 | 0.982 | 0.214 | 0.924 |
| 6  | single Z - CM - AD       | 0.112 | 0.350 | 0.994 | 0.266 | 0.978 |
| 7  | single Z - random - minP | 0.097 | 0.504 | 0.736 | 0.922 | 0.754 |
| 8  | single Z - random - KS   | 0.099 | 0.502 | 0.702 | 0.394 | 0.656 |
| 9  | single Z - random - AD   | 0.102 | 0.556 | 0.708 | 0.436 | 0.750 |
| 10 | vector Z - minP-Bonf     | 0.028 | 0.592 | 0.962 | 1.000 | 0.906 |
| 11 | vector Z - KS-Bonf       | 0.013 | 0.692 | 0.858 | 0.196 | 0.722 |
| 12 | vector Z - AD-Bonf       | 0.011 | 0.772 | 0.820 | 0.132 | 0.778 |
| 13 | vector Z - minP-Hommel   | 0.008 | 0.606 | 0.936 | 1.000 | 0.774 |
| 14 | vector Z - KS-Hommel     | 0.009 | 0.588 | 0.776 | 0.174 | 0.550 |
| 15 | vector Z - AD-Hommel     | 0.007 | 0.720 | 0.760 | 0.150 | 0.668 |
the fact that the cut-off for significance when considering all sample points was conservative, as
manifested by the lower significance levels when the null is true (in column 3, rows 10–15 the actual
significance level is at most 0.028). Applying Hommel's versus Bonferroni's test matters as well, and
the latter has better power in most scenarios. The greatest difference in power is due to the choice of
univariate test. A comparison of using KS (rows 5, 8, 11, and 14) versus AD (rows 6, 9, 12, and 15)
and minP (rows 4, 7, 10, and 13) shows that AD and minP are more powerful than KS, with a
large power gain for using minP when there are many clusters in the data (column 6). As expected,
Hotelling, Edist and MMD perform best for differences in the Gaussian distribution (column 4).
However, in all other settings Hotelling's test has poor power, and our approach with minP as the
univariate test has more power than Edist and MMD in columns 5–7. A possible explanation for
the power advantage of using an omnibus consistent univariate test over Edist is the fact that Edist
aggregates the univariate scores in (3), and the absolute value of these scores is close to zero for
sample points that are on average the same distance away from both groups (even if the spread of the
distances from these sample points differs across groups), and for certain center points the score
can even be negative.
Simulation 2: a closer inspection of a specific alternative. For the data generation of Figure 1, we
can actually predict which of the partition-based univariate tests should be most powerful. This of
course requires knowing the data generation mechanism, which is unknown in practice, but it is
interesting to examine the magnitude of the gaps in power from using optimal versus other choices of
center points and univariate tests. As one intuitively expects, choosing a point on one of the axes
gives the best power. Specifically, looking at the densities of the distributions of distances from
(0,100) in Figure 1 (b), one can expect that a good way to differentiate between the two densities is
by partitioning the sample space into at least five sections, defined by the four intersections of the
two densities closest to the center. In the power analysis in Table 2, $M_5$, a test which looks for the
best 5-way partition, has the highest power among all $M_k$ scores, $k = 2, 3, \ldots$. Similarly, an $S_k$
score sums up all the scores of partitions into exactly k parts, and we would like a partition to be a
refinement of the best five-way partition in order for it to get a good score. Here, $S_8$ has the best
power among all $S_k$ scores, $k = 2, 3, \ldots$. For more details about these univariate tests see SM § C.
In summary, in this specific situation it is possible to predict both a good center point and a good,
very specific univariate score. However, this is not the typical situation, since usually we do not know
enough about the alternative; therefore it is best to pool information from multiple center points
together, as suggested in Section 3, and to use a more general univariate score, such as minP, which
is the minimum of the p-values of the scores $S_k$, $k \in \{2, 3, \ldots\}$.
We expect pooling methods one and two to be more powerful than the third pooling method used
in the current study, since the Bonferroni and Hommel tests are conservative compared to using
the exact permutation null distribution of their corresponding test statistics. We learn from the
experiments above and in SM § B that our approach can be useful in designing well-powered tests,
but that important choices need to be made, especially the choice of univariate test, for the resulting
multivariate test to have good power.

Table 2: The fraction of rejections at the 0.1 significance level for testing $H_0: F_1 = F_2$
when $F_1 = \frac{1}{2} N_2(0, \mathrm{diag}(1, 9)) + \frac{1}{2} N_2(0, \mathrm{diag}(100, 100))$ and $F_2 = \frac{1}{2} N_2(0, \mathrm{diag}(9, 1)) + \frac{1}{2} N_2(0, \mathrm{diag}(100, 100))$, based on a sample of 100 points from each group, using different univariate tests and different center-point schemes. Based on 500 repetitions. The competitors had the
following power: Hotelling, 0.090; Edist, 0.274; MMD, 0.250.

| Test | Partitions considered | Aggregation type | Single center point z = (0, 100) | Single center point z = (0, 4) | Sample points as centers, Bonferroni | Sample points as centers, Hommel |
|------|-----------------------|------------------|----------------------------------|--------------------------------|--------------------------------------|----------------------------------|
| minP | all | minimum p-value | 0.896 | 0.864 | 0.870 | 0.758 |
| KS   | 2×2 | maximum | 0.574 | 0.508 | 0.208 | 0.110 |
| AD   | 2×2 | sum     | 0.504 | 0.702 | 0.064 | 0.030 |
| M5   | 5×5 | maximum | 0.850 | 0.834 | 0.904 | 0.644 |
| S5   | 5×5 | sum     | 0.890 | 0.902 | 0.706 | 0.550 |
| M8   | 8×8 | maximum | 0.820 | 0.794 | 0.856 | 0.586 |
| S8   | 8×8 | sum     | 0.924 | 0.912 | 0.876 | 0.736 |
6 Discussion
We showed that multivariate K-sample and independence tests can be performed by comparing
the univariate distributions of the distances from center points, and that favourable properties of
the univariate tests can carry over to the multivariate test. Specifically, (1) if the univariate test is
consistent then the multivariate test will be consistent (except for a measure zero set of center points);
(2) if the univariate test is distribution-free, the multivariate test has a distribution-free critical value
if the third pooling method is used; and (3) if the univariate test-statistic is a U-statistic of order
m, then aggregating by summation with the sample points as center points produces a multivariate
test-statistic which is a U-statistic of order m + 1. The last property may be useful in working out the
asymptotic null distribution of the multivariate test-statistic, thus avoiding the need for permutations
when using the second pooling method. It may also be useful for working out the non-null distribution
of the test-statistic, which may converge to a meaningful population quantity.
The experiments show great promise for designing multivariate tests using our approach. Even
though only the most conservative distribution-free tests were considered, they had excellent power.
The approach is general, and several important decisions have to be made when tailoring a test to
a specific application: (1) the number and location of the center points; (2) the univariate test; and
(3) the pooling method. We plan to carry out a comprehensive empirical investigation to assess the
impact of the different choices. We believe that our approach will generate useful multivariate tests
for various modern applications, especially applications where the data are naturally represented by
distances, such as the study of microbiome diversity (see SM § B for an example).
The main results were stated for given center points, yet in simulations we select the center points
using the sample. The theoretical results hold for a center point selected at random from the sample.
This can be seen by considering a two-step process, of first selecting the sample point that will be a
center point, and then testing the distances from this center point to the remaining N − 1 sample points.
Since the N sample points are independent, the consistency result holds. However, if the center point
is the center of mass, and it converges to a bad point, then such a test will not be consistent. Therefore
we always recommend at least one center point randomly sampled from a distribution with a support
of positive measure.
Our theoretical results were shown to hold for the Euclidean norm. However, imposing the restriction
that the multivariate distribution function is smooth, the theoretical results will hold more generally
for any norms or quasi-norms. From a practical point of view, adding a small Gaussian error to the
measured signal guarantees that these results will hold for any normed distance.
Acknowledgments
We thank Boaz Klartag and Elchanan Mossel for useful discussions of the main results.
References
[1] Baringhaus, L. & Franz, C. (2004). On a new multivariate two-sample test. Journal of Multivariate Analysis, 88:190–206.
[2] Benjamini, Y. & Yekutieli, D. (2001). The control of the false discovery rate in multiple testing under dependency. The Annals of Statistics, 29(4):1165–1188.
[3] Chwialkowski, K., Ramdas, A., Sejdinovic, D. & Gretton, A. (2015). Fast two-sample testing with analytic representations of probability measures. Advances in Neural Information Processing Systems (NIPS), 28.
[4] Cuesta-Albertos, J. A., Freiman, R. & Ransford, T. (2006). Random projections and goodness-of-fit tests in infinite-dimensional spaces. Bull. Braz. Math. Soc., 37(4):1–25.
[5] Gretton, A., Borgwardt, K.M., Rasch, M.J., Schölkopf, B. & Smola, A. (2007). A kernel method for the two-sample problem. Advances in Neural Information Processing Systems (NIPS), 19.
[6] Gretton, A., Fukumizu, K., Teo, C.H., Song, L., Schölkopf, B. & Smola, A. (2008). A kernel statistical test of independence. Advances in Neural Information Processing Systems, 20:585–592.
[7] Gretton, A. & Gyorfi, L. (2010). Consistent nonparametric tests of independence. Journal of Machine Learning Research, 11:1391–1423.
[8] Gretton, A., Borgwardt, K.M., Rasch, M.J., Schölkopf, B. & Smola, A. (2012). A kernel two-sample test. The Journal of Machine Learning Research, 13:723–773.
[9] Gretton, A., Sejdinovic, D., Strathmann, H., Balakrishnan, S., Pontil, M., Fukumizu, K. & Sriperumbudur, B.K. (2012). Optimal kernel choice for large-scale two-sample tests. Advances in Neural Information Processing Systems, 25:1205–1213.
[10] Hall, P. & Tajvidi, N. (2002). Permutation tests for equality of distributions in high-dimensional settings. Biometrika, 89(2):359–374.
[11] Heller, R., Heller, Y., Kaufman, S., Brill, B. & Gorfine, M. (2016). Consistent distribution-free K-sample and independence tests for univariate random variables. Journal of Machine Learning Research, 17(29):1–54.
[12] Heller, R., Heller, Y. & Gorfine, M. (2013). A consistent multivariate test of association based on ranks of distances. Biometrika, 100(2):503–510.
[13] Henze, N. (1988). A multivariate two-sample test based on the number of nearest neighbor type coincidences. The Annals of Statistics, 16(2):772–783.
[14] Hoeffding, W. (1948a). A non-parametric test of independence. Ann. Math. Stat., 19(4):546–557.
[15] Hoeffding, W. (1948b). A class of statistics with asymptotically normal distributions. Ann. Math. Stat., 19:293–325.
[16] Hommel, G. (1983). Tests of the overall hypothesis for arbitrary dependence structures. Biom. J., 25:423–430.
[17] Hotelling, H. (1931). The generalization of Student's ratio. Ann. Math. Statist., 2(3):360–378.
[18] Kolmogorov, A. N. (1941). Confidence limits for an unknown distribution function. Ann. Math. Stat., 12:461–463.
[19] Maa, J.F., Pearl, D.K. & Bartoszynski, R. (1996). Reducing multidimensional two-sample data to one-dimensional interpoint comparisons. Annals of Statistics, 24(3):1069–1074.
[20] Pettitt, A.N. (1976). A two-sample Anderson-Darling rank statistic. Biometrika, 63(1):161–168.
[21] Rawat, R. & Sitaram, A. (2000). Injectivity sets for spherical means on R^n and on symmetric spaces. Journal of Fourier Analysis and Applications, 6(3):343–348.
[22] Rosenbaum, P. R. (2005). An exact distribution-free test comparing two multivariate distributions based on adjacency. Journal of the Royal Statistical Society B, 67:515–530.
[23] Schilling, M. F. (1986). Multivariate two-sample tests based on nearest neighbors. J. Am. Statist. Assoc., 81:799–806.
[24] Sejdinovic, D., Sriperumbudur, B., Gretton, A. & Fukumizu, K. (2013). Equivalence of distance-based and RKHS-based statistics in hypothesis testing. Annals of Statistics, 41(5):2263–2291.
[25] Székely, G. & Rizzo, M. (2004). Testing for equal distributions in high dimensions. InterStat.
[26] Székely, G., Rizzo, M. & Bakirov, N. (2007). Measuring and testing dependence by correlation of distances. The Annals of Statistics, 35:2769–2794.
[27] Thas, O. & Ottoy, J.P. (2004). A nonparametric test for independence based on sample space partitions. Communications in Statistics - Simulation and Computation, 33(3):711–728.
[28] Wei, S., Lee, C., Wichers, L. & Marron, J. S. (2015). Direction-Projection-Permutation for high dimensional hypothesis tests. Journal of Computational and Graphical Statistics, doi:10.1080/10618600.2015.1027773.
Memory-Efficient Backpropagation Through Time
Audrūnas Gruslys, Google DeepMind, audrunas@google.com
Rémi Munos, Google DeepMind, munos@google.com
Marc Lanctot, Google DeepMind, lanctot@google.com
Ivo Danihelka, Google DeepMind, danihelka@google.com
Alex Graves, Google DeepMind, gravesa@google.com
Abstract
We propose a novel approach to reduce memory consumption of the backpropagation through time (BPTT) algorithm when training recurrent neural networks
(RNNs). Our approach uses dynamic programming to balance a trade-off between
caching of intermediate results and recomputation. The algorithm is capable of
tightly fitting within almost any user-set memory budget while finding an optimal
execution policy minimizing the computational cost. Computational devices have
limited memory capacity and maximizing a computational performance given a
fixed memory budget is a practical use-case. We provide asymptotic computational
upper bounds for various regimes. The algorithm is particularly effective for long
sequences. For sequences of length 1000, our algorithm saves 95% of memory
usage while using only one third more time per iteration than the standard BPTT.
1 Introduction
Recurrent neural networks (RNNs) are artificial neural networks where connections between units
can form cycles. They are often used for sequence mapping problems, as they can propagate hidden
state information from early parts of the sequence back to later points. LSTM [9] in particular
is an RNN architecture that has excelled in sequence generation [3, 13, 4], speech recognition
[5] and reinforcement learning [12, 10] settings. Other successful RNN architectures include the
differentiable neural computer (DNC) [6], DRAW network [8], and Neural Transducers [7].
Backpropagation Through Time algorithm (BPTT) [11, 14] is typically used to obtain gradients
during training. One important problem is the large memory consumption required by the BPTT.
This is especially troublesome when using Graphics Processing Units (GPUs) due to the limitations
of GPU memory.
Memory budget is typically known in advance. Our algorithm balances the tradeoff between memorization and recomputation by finding an optimal memory usage policy which minimizes the total
computational cost for any fixed memory budget. The algorithm exploits the fact that the same
memory slots may be reused multiple times. The idea to use dynamic programming to find a provably
optimal policy is the main contribution of this paper.
Our approach is largely architecture agnostic and works with most recurrent neural networks. Being
able to fit within limited memory devices such as GPUs will typically compensate for any increase in
computational cost.
2 Background and related work
In this section, we describe the key terms and relevant previous work for memory-saving in RNNs.
Definition 1. An RNN core is a feed-forward neural network which is cloned (unfolded in time)
repeatedly, where each clone represents a particular time point in the recurrence.
For example, if an RNN has a single hidden layer whose outputs feed back into the same hidden
layer, then for a sequence length of t the unfolded network is feed-forward and contains t RNN cores.
Definition 2. The hidden state of the recurrent network is the part of the output of the RNN core
which is passed into the next RNN core as an input.
In addition to the initial hidden state, there exists a single hidden state per time step once the network
is unfolded.
Definition 3. The internal state of the RNN core for a given time-point is all the necessary information required to backpropagate gradients over that time step once an input vector, a gradient with
respect to the output vector, and a gradient with respect to the output hidden state is supplied. We
define it to also include an output hidden state.
An internal state can be (re)evaluated by executing a single forward operation taking the previous
hidden state and the respective entry of an input sequence as an input. For most network architectures,
the internal state of the RNN core will include a hidden input state, as this is normally required to
evaluate gradients. This particular choice of the definition will be useful later in the paper.
Definition 4. A memory slot is a unit of memory which is capable of storing a single hidden state
or a single internal state (depending on the context).
2.1
Backpropagation through Time
Backpropagation through Time (BPTT) [11, 14] is one of the commonly used techniques to train
recurrent networks. BPTT "unfolds" the neural network in time by creating several copies of the
recurrent units which can then be treated like a (deep) feed-forward network with tied weights. Once
this is done, a standard forward-propagation technique can be used to evaluate network fitness over
the whole sequence of inputs, while a standard backpropagation algorithm can be used to evaluate
partial derivatives of the loss criteria with respect to all network parameters. This approach, while
being computationally efficient is also fairly intensive in memory usage. This is because the standard
version of the algorithm effectively requires storing internal states of the unfolded network core at
every time-step in order to be able to evaluate correct partial derivatives.
2.2 Trading memory for computation time
The general idea of trading computation time and memory consumption in general computation
graphs has been investigated in the automatic differentiation community [2]. Recently, the rise of
deep architectures and recurrent networks has increased interest in a less general case where the
graph of forward computation is a chain and gradients have to be chained in a reverse order. This
simplification leads to relatively simple memory-saving strategies and heuristics. In the context of
BPTT, instead of storing hidden network states, some of the intermediate results can be recomputed
on demand by executing an extra forward operation.
Chen et al. proposed subdividing the sequence of size t into √t equal parts and memorizing only
hidden states between the subsequences and all internal states within each segment [1]. This uses
O(√t) memory at the cost of making one additional forward pass on average, as once the errors are
backpropagated through the right side of the sequence, the second-last subsequence has to be restored
by repeating a number of forward operations. We refer to this as Chen's √t algorithm.
The authors also suggest applying the same technique recursively several times by subdividing the
sequence into k equal parts and terminating the recursion once the subsequence length becomes less
than k. The authors have established that this would lead to a memory consumption of O(k log_{k+1}(t))
and a computational complexity of O(t log_k(t)). This algorithm has a minimum possible memory
usage of log_2(t) in the case when k = 1. We refer to this as Chen's recursive algorithm.
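To make the √t strategy concrete, here is a minimal sketch of ours for a generic state chain; `step` and `grad_step` are hypothetical single-core interfaces, and a real implementation would also accumulate parameter gradients:

```python
import math

def sqrt_checkpoint_backprop(step, grad_step, h0, xs, g_out):
    """Sketch of Chen's sqrt(t) strategy on a chain of states. Assumed
    interfaces (illustrative, not from any library):
      step(h, x) -> h_next               one forward application of the core
      grad_step(h, x, g_next) -> g_prev  backprop through one step, given
                                         the step's input state h
    Returns the gradient with respect to the initial state h0."""
    t = len(xs)
    k = max(1, math.isqrt(t))
    # Forward pass: keep only every k-th hidden state (about sqrt(t) checkpoints).
    checkpoints, h = {}, h0
    for i, x in enumerate(xs):
        if i % k == 0:
            checkpoints[i] = h
        h = step(h, x)
    # Backward pass: restore each segment with one extra forward sweep from
    # its checkpoint, then backpropagate through the segment in reverse.
    g, end = g_out, t
    for start in sorted(checkpoints, reverse=True):
        states = [checkpoints[start]]
        for i in range(start, end - 1):
            states.append(step(states[-1], xs[i]))
        for i in range(end - 1, start - 1, -1):
            g = grad_step(states[i - start], xs[i], g)
        end = start
    return g
```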
3 Memory-efficient backpropagation through time
We first discuss two simple examples: when memory is very scarce, and when it is somewhat limited.
When memory is very scarce, it is straightforward to design a simple but computationally inefficient
algorithm for backpropagation of errors on RNNs which only uses a constant amount of memory.
Every time when the state of the network at time t has to be restored, the algorithm would simply
re-evaluate the state by forward-propagating inputs starting from the beginning until time t. As
backpropagation happens in the reverse temporal order, results from the previous forward steps can
not be reused (as there is no memory to store them). This would require repeating t forward steps
before backpropagating gradients one step backwards (we only remember inputs and the initial state).
This would produce an algorithm requiring t(t + 1)/2 forward passes to backpropagate errors over t
time steps. The algorithm would be O(1) in space and O(t²) in time.
When the memory is somewhat limited (but not very scarce) we may store only hidden RNN states
at all time points. When errors have to be backpropagated from time t to t ? 1, an internal RNN
core state can be re-evaluated by executing another forward operation taking the previous hidden
state as an input. The backward operation can follow immediately. This approach can lead to fairly
significant memory savings, as typically the recurrent network hidden state is much smaller than an
internal state of the network core itself. On the other hand this leads to another forward operation
being executed during the backpropagation stage.
3.1 Backpropagation through time with selective hidden state memorization (BPTT-HSM)
The idea behind the proposed algorithm is to compromise between two previous extremes. Suppose
that we want to forward and backpropagate a sequence of length t, but we are only able to store m
hidden states in memory at any given time. We may reuse the same memory slots to store different
hidden states during backpropagation. Also, suppose that we have a single RNN core available for
the purposes of intermediate calculations which is able to store a single internal state. Define C(t, m)
as a computational cost of backpropagation measured in terms of how many forward-operations one
has to make in total during forward and backpropagation steps combined when following an optimal
memory usage policy minimizing the computational cost. One can easily set the boundary conditions:
C(t, 1) = ½t(t + 1) is the cost of the minimal memory approach, while C(t, m) = 2t − 1 for all
m ≥ t when memory is plentiful (as shown in Fig. 3 a). Our approach is illustrated in Figure 1. Once
we start forward-propagating steps at time t = t0, at any given point y > t0 we can choose to put the
current hidden state into memory (step 1). This step has the cost of y forward operations. States will
be read in the reverse order in which they were written: this allows the algorithm to store states in a
stack. Once the state is put into memory at time y = D(t, m), we can reduce the problem into two
parts by using a divide-and-conquer approach: running the same algorithm on the t > y side of the
sequence while using m − 1 of the remaining memory slots at the cost of C(t − y, m − 1) (step 2),
and then reusing m memory slots when backpropagating on the t ≤ y side at the cost of C(y, m)
(step 3). We use the full size-m memory capacity when performing step 3 because we can release the
hidden state y immediately after finishing step 2.
Figure 1: The proposed divide-and-conquer approach. Step 1 (cost y): the hidden state is forward-propagated to position y and stored in memory. Step 2 (cost C(t − y, m − 1)): the algorithm is applied recursively to positions y + 1, ..., t with m − 1 memory slots. Step 3 (cost C(y, m)): the algorithm is applied recursively to positions 1, ..., y with all m memory slots, reading the stored hidden state back from memory.
The base case for the recurrent algorithm is simply a sequence of length t = 1 when forward and
backward propagation may be done trivially on a single available RNN network core. This step has
the cost C(1, m) = 1.
Figure 2: Computational cost per time-step when the algorithm is allowed to remember 10 (red), 50
(green), 100 (blue), 500 (violet), or 1000 (cyan) hidden states: (a) theoretical computational cost
measured in number of forward operations per time step; (b) measured computational cost in
milliseconds. The grey line shows the performance of standard BPTT without memory constraints.
Plot (b) also includes a large constant caused by the single backwards step per time step, which was
excluded from the theoretical computation; this makes the relative performance loss much less severe
in practice than in theory.
Having established the protocol we may find an optimal policy D(t, m). Define the cost of choosing
the first state to be pushed at position y and later following the optimal policy as:

$$Q(t, m, y) = y + C(t - y, m - 1) + C(y, m) \quad (1)$$

$$C(t, m) = Q(t, m, D(t, m)) \quad (2)$$

$$D(t, m) = \operatorname*{argmin}_{1 \le y < t} Q(t, m, y) \quad (3)$$

Figure 3: Illustration of the optimal policy for m = 4 and a) t = 4 (cost per time step CPTS ≤ 2) and
b) t = 10 (CPTS ≤ 3). Logical sequence time goes from left to right, while execution happens from
top to the bottom.
Equations can be solved exactly by using dynamic programming subject to the boundary conditions
established previously (e.g. as in Figure 2(a)). D(t, m) will determine the optimal policy to follow.
Pseudocode is given in the supplementary material. Figure 3 illustrates an optimal policy found for
two simple cases.
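The original pseudocode lives in the supplementary material; as an unofficial sketch, the dynamic program (1)-(3) can also be written directly (the function name is ours, and the plain O(T²·M) Python loops favor clarity over speed):

```python
import numpy as np

def bptt_hsm_policy(T, M):
    """Sketch of the dynamic program (1)-(3) for BPTT-HSM. C[t, m] is the
    minimal number of forward operations for a sequence of length t with m
    hidden-state memory slots; D[t, m] is the position of the first push."""
    C = np.full((T + 1, M + 1), np.inf)
    D = np.zeros((T + 1, M + 1), dtype=int)
    for t in range(1, T + 1):
        C[t, 1] = t * (t + 1) / 2           # minimal-memory boundary condition
    for m in range(1, M + 1):
        C[1, m] = 1                         # base case: a sequence of length 1
    for m in range(2, M + 1):
        for t in range(2, T + 1):
            if m >= t:
                C[t, m] = 2 * t - 1         # plentiful memory: C(t, m) = 2t - 1
                continue
            costs = [y + C[t - y, m - 1] + C[y, m] for y in range(1, t)]
            D[t, m] = 1 + int(np.argmin(costs))
            C[t, m] = costs[D[t, m] - 1]
    return C, D

# Example: cost per time step for t = 1000 with m = 50 memory slots.
# C, D = bptt_hsm_policy(1000, 50); print(C[1000, 50] / 1000)
```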
3.2 Backpropagation through time with selective internal state memorization (BPTT-ISM)
Saving internal RNN core states instead of hidden RNN states would allow us to save a single forward
operation during backpropagation in every divide-and-conquer step, but at a higher memory cost.
Suppose we have a memory capacity capable of saving exactly m internal RNN states. First, we need
to modify the boundary conditions: C(t, 1) = ½t(t + 1) is a cost reflecting the minimal memory
approach, while C(t, m) = t for all m ≥ t when memory is plentiful (equivalent to standard BPTT).
As previously, C(t, m) is defined to be the computational cost for combined forward and backward
propagations over a sequence of length t with memory allowance m while following an optimal
memory usage policy. As before, the cost is measured in terms of the total number of forward steps
made, because the number of backwards steps is constant. Similarly to BPTT-HSM, the process
can be divided into parts using a divide-and-conquer approach (Fig. 4). For any values of t and m
the position of the first memorization y = D(t, m) is evaluated. y forward operations are executed and
an internal RNN core state is placed into memory. This step has the cost of y forward operations
(Step 1 in Figure 4). As the internal state also contains an output hidden state, the same algorithm can
be recurrently run on the high-time (right) side of the sequence while having one less memory slot
available (Step 2 in Figure 4). This step has the cost of C(t − y, m − 1) forward operations. Once
gradients are backpropagated through the right side of the sequence, backpropagation can be done
over the stored RNN core (Step 3 in Figure 4). This step has no additional cost as it involves no more
forward operations. The memory slot can now be released, leaving m memory available. Finally, the
same algorithm is run on the left side of the sequence (Step 4 in Figure 4). This final step has the cost
of C(y − 1, m) forward operations. Summing the costs gives us the following equation:

$$Q(t, m, y) = y + C(y - 1, m) + C(t - y, m - 1) \quad (4)$$

The recursion has a single base case: backpropagation over an empty sequence is a nil operation which
has no computational cost, making C(0, m) = 0.
Figure 4: Illustration of the divide-and-conquer approach used by BPTT-ISM. Step 1 (cost y): forward-propagate to position y and store the internal RNN core state (including its output hidden state). Step 2 (cost C(t − y, m − 1)): apply the algorithm recursively to the right part of the sequence with one less memory slot. Step 3 (cost 0): backpropagate through the stored core; no forward operations are involved. Step 4 (cost C(y − 1, m)): apply the algorithm recursively to the left part with all m slots.
Compared to the previous section, (2) stays the same while the minimization in (3) now runs over
1 ≤ y ≤ t instead of 1 ≤ y < t, because it is meaningful to remember the last internal state while
there was no reason to remember the last hidden state:

$$D(t, m) = \operatorname*{argmin}_{1 \le y \le t} Q(t, m, y) \quad (5)$$

A numerical solution of C(t, m) for several different memory capacities is shown in Figure 5(a).
As seen in Figure 5(a), our methodology saves 95% of memory for sequences of 1000 (excluding input
vectors) while using only 33% more time per training iteration than the standard BPTT (assuming a
single backward step is twice as expensive as a forward step).
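The corresponding dynamic program changes only in the boundary conditions and the range of y; a sketch under the same assumptions as before (illustrative function name):

```python
import numpy as np

def bptt_ism_policy(T, M):
    """Sketch of the BPTT-ISM dynamic program (4)-(5). Differences from
    BPTT-HSM: the plentiful-memory cost is t (no extra forward pass during
    backpropagation), and the first push position y may equal t."""
    C = np.full((T + 1, M + 1), np.inf)
    D = np.zeros((T + 1, M + 1), dtype=int)
    C[0, 1:] = 0                            # C(0, m) = 0
    for t in range(1, T + 1):
        C[t, 1] = t * (t + 1) / 2           # minimal-memory boundary condition
    for m in range(2, M + 1):
        for t in range(1, T + 1):
            if m >= t:
                C[t, m] = t                 # plentiful memory: standard BPTT
                continue
            costs = [y + C[y - 1, m] + C[t - y, m - 1] for y in range(1, t + 1)]
            D[t, m] = 1 + int(np.argmin(costs))
            C[t, m] = costs[D[t, m] - 1]
    return C, D
```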
3.3 Backpropagation through time with mixed state memorization (BPTT-MSM)
It is possible to derive an even more general model by combining both approaches as described in
Sections 3.1 and 3.2. Suppose we have a total memory capacity m measured in terms of how many
single hidden states can be remembered. Also suppose that storing an internal RNN core state takes
α times more memory, where α ≥ 2 is some integer number. We will choose between saving a single
hidden state while using a single memory unit and storing an internal RNN core state by using α
times more memory. The benefit of storing an internal RNN core state is that we will be able to save
a single forward operation during backpropagation.
Define C(t, m) as the computational cost, in terms of the total number of forward operations, when
running an optimal strategy.

Figure 5: Comparison of two backpropagation algorithms in terms of theoretical costs: (a) BPTT-ISM
(Section 3.2); (b) BPTT-MSM (Section 3.3). Different lines show the number of forward operations
per time-step when the memory capacity is limited to 10 (red), 50 (green), 100 (blue), 500 (violet),
or 1000 (cyan) internal RNN core states. Please note that the units of memory measurement are
different than in Figure 2(a) (size of an internal state vs size of a hidden state). It was assumed that
the size of an internal core state is α = 5 times larger than the size of a hidden state. The value of α
influences only the right plot. All costs shown on the right plot should be less than the respective
costs shown on the left plot for any value of α.

We use the following boundary conditions: C(t, 1) = ½t(t + 1) as a cost reflecting the minimal
memory approach, C(t, m) = t for all m ≥ αt when memory is plentiful, C(t − y, m) = ∞ for all
m ≤ 0, and C(0, m) = 0 for notational convenience. We use
a similar divide-and-conquer approach to the one used in previous sections.
Define $Q_1(t, m, y)$ as the computational cost if we choose to first remember a hidden state at
position y and thereafter follow an optimal policy (identical to (1)):

$$Q_1(t, m, y) = y + C(y, m) + C(t - y, m - 1) \quad (6)$$
Similarly, define $Q_2(t, m, y)$ as the computational cost if we choose to first remember an internal
state at position y and thereafter follow an optimal policy (similar to (4), except that now the internal
state takes α memory units):

$$Q_2(t, m, y) = y + C(y - 1, m) + C(t - y, m - \alpha) \quad (7)$$
Define $D_1$ as an optimal position of the next push assuming that the next state to be pushed is a
hidden state, and define $D_2$ as an optimal position if the next push is an internal core state. Note that
$D_2$ is minimized over a different range, for the same reasons as in equation (5):

$$D_1(t, m) = \operatorname*{argmin}_{1 \le y < t} Q_1(t, m, y), \qquad D_2(t, m) = \operatorname*{argmin}_{1 \le y \le t} Q_2(t, m, y) \quad (8)$$
Also define $C_i(t, m) = Q_i(t, m, D_i(t, m))$ and finally:

$$C(t, m) = \min_i C_i(t, m), \qquad H(t, m) = \operatorname*{argmin}_i C_i(t, m) \quad (9)$$
We can solve the above equations by simple dynamic programming. H(t, m) will indicate
whether the next state to be pushed into memory is a hidden state or an internal state, while the
respective value of $D_1(t, m)$ or $D_2(t, m)$ will indicate the position of the next push.
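A sketch of the combined dynamic program (6)-(9), again ours rather than the authors' pseudocode; for brevity it returns only the cost table C (the policy H, D1, D2 can be recovered by recording which option attains each minimum), and the refinement of Section 3.4 is omitted:

```python
import numpy as np

def bptt_msm_cost(T, M, alpha):
    """Sketch of the BPTT-MSM dynamic program (6)-(9): at each division point
    choose between pushing a hidden state (1 memory unit, option Q1) and
    pushing an internal core state (alpha units, option Q2)."""
    C = np.full((T + 1, M + 1), np.inf)
    C[0, :] = 0                              # C(0, m) = 0
    C[1, 1:] = 1                             # a length-1 sequence always costs 1
    for t in range(1, T + 1):
        C[t, 1] = t * (t + 1) / 2            # minimal-memory boundary condition
        C[t, alpha * t:] = t                 # plentiful memory: C(t, m) = t, m >= alpha*t
    for m in range(2, M + 1):
        for t in range(2, T + 1):
            if C[t, m] == t:                 # already at the plentiful-memory optimum
                continue
            for y in range(1, t):            # option Q1: push a hidden state at y
                C[t, m] = min(C[t, m], y + C[y, m] + C[t - y, m - 1])
            if m - alpha >= 1:               # option Q2: push an internal state at y
                for y in range(1, t + 1):
                    C[t, m] = min(C[t, m], y + C[y - 1, m] + C[t - y, m - alpha])
    return C
```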
3.4 Removing double hidden-state memorization
Definition 3 of the internal RNN core state would typically require a hidden input state to be included
with each memorization. This may lead to duplication of information. For example, when an
optimal strategy is to remember a few internal RNN core states in sequence, a memorized hidden
output of one would be equal to a memorized hidden input of the other one (see Definition 3).
Every time we want to push an internal RNN core state onto the stack and a previous internal state is
already there, we may omit pushing the input hidden state. Recall that an internal core RNN state
when an input hidden state is otherwise not known is α times larger than a hidden state. Define β ≤ α
as the space required to memorize the internal core state when an input hidden state is known. The
relationship between α and β is application-specific, but in many circumstances α = β + 1. We only
have to modify (7) to reflect this optimization:

$$Q_2(t, m, y) = y + C(y - 1, m) + C(t - y, m - \mathbb{1}_{y>1}\,\beta - \mathbb{1}_{y=1}\,\alpha) \quad (10)$$

$\mathbb{1}$ is an indicator function. Equations for H(t, m), $D_i(t, m)$ and C(t, m) are identical to (8) and (9).
3.5 Analytical upper bound for BPTT-HSM
1
We have established a theoretical upper bound for BPTT-HSM algorithm as C(t, m) ? mt1+ m . As
1
the bound is not tight for short sequences, it was also numerically verified that C(t, m) < 4t1+ m for
1
5
3
1+ m
t < 10 and m < 10 , or less than 3t
if the initial forward pass is excluded. In addition to that,
m
we have established a different bound in the regime where t < mm! . For any integer value a and for
a
all t < ma! the computational cost is bounded by C(t, m) ? (a + 1)t. The proofs are given in the
supplementary material. Please refer to supplementary material for discussion on the upper bounds
for BPTT-MSM and BPTT-ISM.
3.6 Comparison of the three different strategies
Figure 6: Comparison of three strategies when the size of an internal RNN core state is α = 5 times
larger than that of the hidden state, and the total memory capacity allows us to remember either 10
internal RNN states, or 50 hidden states, or any arbitrary mixture of those in the left plot ((a), 10α
memory), and either 20 internal RNN states, or 100 hidden states, or any mixture of those in the right
plot ((b), 20α memory). The red curve illustrates BPTT-HSM, the green curve BPTT-ISM and the
blue curve BPTT-MSM. Please note that for large sequence lengths the red curve outperforms the
green one, and the blue curve outperforms the other two.
Computational costs for each previously described strategy were computed, and the results are shown
in Figure 6. BPTT-MSM outperforms both BPTT-ISM and BPTT-HSM. This is unsurprising, because
the search space in that case is a superset of both strategy spaces, and the algorithm finds an optimal
strategy within that space. Also, for a fixed memory capacity, the strategy memorizing only hidden
states outperforms the strategy memorizing internal RNN core states for long sequences, while the
latter outperforms the former for relatively short sequences.
4 Discussion
We used an LSTM mapping 256 inputs to 256 outputs with a batch size of 64 and measured execution
time for a single gradient descent step (forward and backward operation combined) as a function of
sequence length (Figure 2(b)). Please note that the measured computational time also includes the
time taken by backward operations at each time-step, which the dynamic programming equations did
not take into account. A single backward operation is usually twice as expensive as a forward
operation, because it involves evaluating gradients both with respect to input data and internal
parameters. Still, as the number of backward operations is constant it has no impact on the optimal
strategy.
4.1 Optimality
The dynamic program finds the optimal computational strategy by construction, subject to memory
constraints and a fairly general model that we impose. As both strategies proposed by [1] are
7
consistent with all the assumptions that we have made in section 3.4 when applied to RNNs, BPTTMSM is guaranteed to perform at least as well under any memory budget and any sequence length.
This is because strategies proposed by [1] can be expressed by providing a (potentially suboptimal)
policy Di (t, m), H(t, m) subject to the same equations for Qi (t, m).
4.2 Numerical comparison with Chen's √t algorithm
Chen's √t algorithm requires remembering √t hidden states and √t internal RNN states (excluding
input hidden states), while the recursive approach requires remembering at least log_2 t hidden states.
In other words, the model does not allow for fine-grained control over memory usage and rather
saves some memory. In the meantime our proposed BPTT-MSM can fit within almost arbitrary
constant memory constraints, and this is the main advantage of our algorithm.
?
Figure 7: Left: memory consumption divided by t(1 + ?) for a fixed computational
cost C = 2.
?
Right: computational cost per time-step for a fixed memory consumption of t(1 + ?). Red, green
and blue curves correspond to ? = 2, 5, 10 respectively.
The non-recursive Chen's √t approach does not allow matching an arbitrary memory budget, which
makes a like-for-like comparison difficult. Instead of fixing the memory budget, it is possible to fix
the computational cost at 2 forward iterations on average, to match the cost of the √t algorithm, and
observe how much memory our approach would use. Memory usage by the √t algorithm would be
equivalent to saving √t hidden states and √t internal core states. Let us suppose that the internal
RNN core state is α times larger than the hidden state. In this case the size of the internal RNN core
state excluding the input hidden state is ρ = α − 1. This would give a memory usage of Chen's
algorithm as √t(1 + ρ) = √t · α, as it needs to remember √t hidden states and √t internal states,
where input hidden states can be omitted to avoid duplication. Figure 7 illustrates memory usage by
our algorithm divided by √t(1 + ρ) for a fixed execution speed of 2, as a function of sequence length
and for different values of the parameter ρ. Values lower than 1 indicate memory savings. As can be
seen, we can save a significant amount of memory for the same computational cost.
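The quantity plotted in the left panel of Figure 7 can be sketched as follows; `bptt_msm_memory` is a hypothetical helper standing in for a run of the BPTT-MSM dynamic program at a fixed average cost of 2 forward iterations, so only Chen's side is computed explicitly here.

```python
import math

# Chen's memory usage in hidden-state-sized slots, assuming the internal
# core state is alpha times larger than a hidden state, i.e. rho = alpha - 1.
def chen_memory(t, rho):
    return math.sqrt(t) * (1 + rho)

# ratio = bptt_msm_memory(t, rho, cost=2) / chen_memory(t, rho)  # hypothetical
for rho in (2, 5, 10):
    print(rho, round(chen_memory(10**4, rho)))
```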
Another experiment is to measure the computational cost for a fixed memory consumption of
√t(1 + ρ). The results are shown in Figure 7. A computational cost of 2 corresponds to Chen's √t
algorithm. This illustrates that our approach does not perform significantly faster (although it does
not do any worse). This is because Chen's √t strategy is actually near-optimal for this particular
memory budget. Still, as seen from the previous paragraph, this memory budget is already in the
regime of diminishing returns, and further memory reductions are possible for almost the same
computational cost.
5 Conclusion
In this paper, we proposed a novel approach for finding optimal backpropagation strategies for
recurrent neural networks under a fixed, user-defined memory budget. We demonstrated that the
most general of the algorithms is at least as good as many commonly used heuristics. The main
advantage of our approach is its ability to fit tightly within almost any user-specified memory
constraint while attaining maximal computational performance.
References
[1] Tianqi Chen, Bing Xu, Zhiyuan Zhang, and Carlos Guestrin. Training deep nets with sublinear
memory cost. arXiv preprint arXiv:1604.06174, 2016.
[2] Benjamin Dauvergne and Laurent Hascoët. The data-flow equations of checkpointing in reverse
automatic differentiation. In Computational Science – ICCS 2006, pages 566–573. Springer,
2006.
[3] Douglas Eck and Juergen Schmidhuber. A first look at music composition using LSTM recurrent
neural networks. Istituto Dalle Molle Di Studi Sull Intelligenza Artificiale, 103, 2002.
[4] Alex Graves. Supervised Sequence Labelling with Recurrent Neural Networks. Studies in
Computational Intelligence. Springer, 2012.
[5] Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep
recurrent neural networks. In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE
International Conference on, pages 6645–6649. IEEE, 2013.
[6] Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwińska, Sergio Gómez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou,
Adrià Puigdomènech Badia, Karl Moritz Hermann, Yori Zwols, Georg Ostrovski, Adam Cain,
Helen King, Christopher Summerfield, Phil Blunsom, Koray Kavukcuoglu, and Demis Hassabis.
Hybrid computing using a neural network with dynamic external memory. Nature, advance
online publication, October 2016.
[7] Edward Grefenstette, Karl Moritz Hermann, Mustafa Suleyman, and Phil Blunsom. Learning to
transduce with unbounded memory. In Advances in Neural Information Processing Systems,
pages 1819–1827, 2015.
[8] Karol Gregor, Ivo Danihelka, Alex Graves, and Daan Wierstra. DRAW: A recurrent neural
network for image generation. arXiv preprint arXiv:1502.04623, 2015.
[9] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation,
9(8):1735–1780, 1997.
[10] Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap,
Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In Proceedings of the 33rd International Conference on Machine Learning
(ICML), pages 1928–1937, 2016.
[11] David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. Learning internal representations by error propagation. Technical report, DTIC Document, 1985.
[12] Ivan Sorokin, Alexey Seleznev, Mikhail Pavlov, Aleksandr Fedorov, and Anastasiia Ignateva.
Deep attention recurrent Q-network. arXiv preprint arXiv:1512.01693, 2015.
[13] Ilya Sutskever, James Martens, and Geoffrey E Hinton. Generating text with recurrent neural
networks. In Proceedings of the 28th International Conference on Machine Learning (ICML-11),
pages 1017–1024, 2011.
[14] Paul J Werbos. Backpropagation through time: what it does and how to do it. Proceedings of
the IEEE, 78(10):1550–1560, 1990.
5,772 | 6,222 | Brains on Beats
Umut Güçlü
Radboud University, Donders Institute for
Brain, Cognition and Behaviour
Nijmegen, the Netherlands
[email protected]
Jordy Thielen
Radboud University, Donders Institute for
Brain, Cognition and Behaviour
Nijmegen, the Netherlands
[email protected]
Michael Hanke∗
Otto-von-Guericke University Magdeburg
Center for Behavioral Brain Sciences
Magdeburg, Germany
[email protected]
Marcel A. J. van Gerven†
Radboud University, Donders Institute for
Brain, Cognition and Behaviour
Nijmegen, the Netherlands
[email protected]
Abstract
We developed task-optimized deep neural networks (DNNs) that achieved state-of-the-art performance in different evaluation scenarios for automatic music tagging.
These DNNs were subsequently used to probe the neural representations of music.
Representational similarity analysis revealed the existence of a representational
gradient across the superior temporal gyrus (STG). Anterior STG was shown to
be more sensitive to low-level stimulus features encoded in shallow DNN layers
whereas posterior STG was shown to be more sensitive to high-level stimulus
features encoded in deep DNN layers.
1 Introduction
The human sensory system is devoted to the processing of sensory information to drive our perception
of the environment [1]. Sensory cortices are thought to encode a hierarchy of ever more invariant
representations of the environment [2]. A research question that is at the core of sensory neuroscience
is what sensory information is processed as one traverses the sensory pathways from the primary
sensory areas to higher sensory areas.
The majority of the work on auditory cortical representations has remained limited to understanding
the neural representation of hand-designed low-level stimulus features such as spectro-temporal
models [3], spectro-location models [4], timbre, rhythm, tonality [5–7] and pitch [8] or high-level
representations such as music genre [9] and sound categories [10]. For example, Santoro et al. [3]
found that a joint frequency-specific modulation transfer function predicted observed fMRI activity
best compared to frequency-nonspecific and independent models. They showed specificity to fine
spectral modulations along Heschl's gyrus (HG) and anterior superior temporal gyrus (STG), whereas
coarse spectral modulations were mostly located posterior-laterally to HG, on the planum temporale
(PT), and STG. Preference for slow temporal modulations was found along HG and STG, whereas fast
temporal modulations were observed on PT, and posterior and medially adjacent to HG. Also, it has
been shown that activity in STG, somatosensory cortex, the default mode network, and cerebellum
are sensitive to timbre, while amygdala, hippocampus and insula are more sensitive to rhythmic and
∗ http://psychoinformatics.de; supported by the German federal state of Saxony-Anhalt and the European Regional Development Fund (ERDF), project: Center for Behavioral Brain Sciences.
† http://www.ccnlab.net; supported by VIDI grant 639.072.513 of the Netherlands Organization for Scientific Research (NWO).
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
tonality features [5, 7]. However, these efforts have not yet provided a complete algorithmic account
of sensory processing in the auditory system.
Since their resurgence, deep neural networks (DNNs) coupled with functional magnetic resonance
imaging (fMRI) have provided a powerful approach to form and test alternative hypotheses about
what sensory information is processed in different brain regions. On one hand, a task-optimized DNN
model learns a hierarchy of nonlinear transformations in a supervised manner with the objective
of solving a particular task. On the other hand, fMRI measures local changes in blood-oxygen-level dependent hemodynamic responses to sensory stimulation. Subsequently, any subset of the
DNN representations that emerge from this hierarchy of nonlinear transformations can be used to
probe neural representations by comparing DNN and fMRI responses to the same sensory stimuli.
Considering that the sensory systems are biological neural networks that routinely perform the same
tasks as their artificial counterparts, it is not inconceivable that DNN representations are suitable for
probing neural representations.
Indeed, this approach has been shown to be extremely successful in visual neuroscience. To date,
several task-optimized DNN models were used to accurately model visual areas on the dorsal and
ventral streams [11–18], revealing representational gradients where deeper neural network layers
map to more downstream areas along the visual pathways [19, 20]. Recently, [21] has shown that
deep neural networks trained to map speech excerpts to word labels could be used to predict brain
responses to natural sounds. Here, deeper neural network layers were shown to map to auditory brain
regions that were more distant from primary auditory cortex.
In the present work we expand on this line of research where our aim was to model how the human
brain responds to music. We achieve this by probing neural representations of music features across
the superior temporal gyrus using a deep neural network optimized for music tag prediction. We
used the representations that emerged after training a DNN to predict tags of musical excerpts as
candidate representations for different areas of STG in representational similarity analysis. We show
that different DNN layers correspond to different locations along STG such that anterior STG is
shown to be more sensitive to low-level stimulus features encoded in shallow DNN layers whereas
posterior STG is shown to be more sensitive to high-level stimulus features encoded in deep DNN
layers.
2 Materials and Methods
2.1 MagnaTagATune Dataset
We used the MagnaTagATune dataset [22] for DNN estimation. The dataset contains 25,863 music
clips. Each clip is a 29-second-long excerpt from one of 5223 songs from 445 albums by 230 artists. Each
excerpt is supplied with a vector of binary annotations of 188 tags. These annotations are obtained
by humans playing the two-player online TagATune game. In this game, the two players are either
presented with the same or a different audio clip. Subsequently, they are asked to come up with
tags for their specific audio clip. Afterward, players view each other?s tags and are asked to decide
whether they were presented the same audio clip. Tags are only assigned when more than two players
agreed. The annotations include tags like 'singer', 'no singer', 'violin', 'drums', 'classical', 'jazz', et
cetera. We restricted our analysis of this dataset to the top 50 most popular tags to ensure that there
is enough training data for each tag. Parts 1-12 were used for training, part 13 was used for validation
and parts 14-16 were used for testing.
2.2 Studyforrest Dataset
We used the existing studyforrest dataset [23] for representational similarity analysis. The dataset
contains fMRI data on the perception of musical genres. Twenty participants (age 21-38 years, mean
age 26.6 years), with normal hearing and no known history of neurological disorders, listened to
twenty-five 6-second, 44.1 kHz music clips. The stimulus set comprised five clips for each of the
five following genres: Ambient, Roots Country, Heavy Metal, 50s Rock 'n Roll, and Symphonic.
Stimuli were selected according to the procedure of [9]. The Ambient and Symphonic genres can
be considered as non-vocal and the others as vocal. Participants completed eight runs, each with all
twenty-five clips.
Ultra-high-field (7 Tesla) fMRI images were collected using a Siemens MAGNETOM scanner,
T2*-weighted echo-planar images (gradient-echo, repetition time (TR) = 2000 ms, echo time (TE) =
22 ms, 0.78 ms echo spacing, 1488 Hz/Px bandwidth, generalized auto-calibrating partially parallel
acquisition (GRAPPA), acceleration factor 3, 24 Hz/Px bandwidth in phase encoding direction), and
a 32 channel brain receiver coil. Thirty-six axial slices were acquired (thickness = 1.4 mm, 1.4 ×
1.4 mm in-plane resolution, 224 mm field-of-view (FOV) centered on the approximate location of
Heschl's gyrus, anterior-to-posterior phase encoding direction, 10% inter-slice gap). Along with the
functional data, cardiac and respiratory traces, and a structural MRI were collected. In our analyses,
we only used the data from the 12 subjects (Subjects 1, 3, 4, 6, 7, 9, 12, 14–18) with no known data
anomalies as reported in [23].
The anatomical and functional scans were preprocessed as follows: Functional scans were realigned
to the first scan of the first run and next to the mean scan. Anatomical scans were coregistered
to the mean functional scan. Realigned functional scans were slice-time corrected to correct for
the differences in image acquisition times between the slices. Realigned and slice-time corrected
functional scans were normalized to MNI space. Finally, a general linear model was used to remove
noise regressors derived from voxels unrelated to the experimental paradigm and estimate BOLD
response amplitudes [24]. We restricted our analyses to the superior temporal gyrus (STG).
2.3 Deep Neural Networks
[Figure 1 diagram: input (96k samples) → conv1–conv5 with batch normalization (BN), pool1, pool2 and pool5 → full6–full8 with dropout (DO) → 50 sigmoid outputs; per-layer channel counts, kernel sizes, strides and i/o sizes are annotated.]
We developed three task-optimized DNN models for tag prediction. Two of the models comprised
five convolutional layers followed by three fully-connected layers (DNN-T model and DNN-F model).
The inputs to the models were 96000-dimensional time (DNN-T model) and frequency (DNN-F
model) domain representations of six second-long audio signals, respectively. One of the models
comprised two streams of five convolutional layers followed by three fully connected layers (DNN-TF
model). The inputs to the streams were given by the time and frequency representations. The outputs
of the convolutional streams were merged and fed into first fully-connected layer. Figure 1 illustrates
the architecture of the one-stream models.
Figure 1: Architecture of the one-stream models. The first seven layers are followed by parametric
softplus units [25], and the last layer is followed by sigmoid units. The architecture is similar to that
of AlexNet [26] except for the following modifications: (i) The number of convolutional kernels
is halved. (ii) The (convolutional and pooling) kernels and strides are flattened. That is, an n × n
kernel is changed to an n² × 1 kernel and an m × m stride is changed to an m² × 1 stride. (iii)
Local response normalization is replaced with batch normalization [27]. (iv) Rectified linear units are
replaced with parametric softplus units with initial α = 0.2 and initial β = 0.5. (v) Softmax units are
replaced with sigmoid units.
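The DNN models were implemented in Keras [30]; the snippet below is our own minimal re-creation of a one-stream model in that spirit, not the authors' code. Filter counts, kernel sizes and strides are taken loosely from the figure, and the parametric softplus of [25] is approximated here by the stock softplus activation.

```python
from tensorflow import keras
from tensorflow.keras import layers

def one_stream_model(input_len=96000, n_tags=50):
    x_in = keras.Input(shape=(input_len, 1))                             # raw audio
    x = layers.Conv1D(48, 121, strides=16, activation='softplus')(x_in)  # conv1
    x = layers.BatchNormalization()(x)                                   # BN
    x = layers.MaxPooling1D(9, strides=4)(x)                             # pool1
    x = layers.Conv1D(128, 25, activation='softplus')(x)                 # conv2
    x = layers.BatchNormalization()(x)                                   # BN
    x = layers.MaxPooling1D(9, strides=4)(x)                             # pool2
    x = layers.Conv1D(192, 9, activation='softplus')(x)                  # conv3
    x = layers.Conv1D(192, 9, activation='softplus')(x)                  # conv4
    x = layers.Conv1D(128, 9, activation='softplus')(x)                  # conv5
    x = layers.MaxPooling1D(9, strides=4)(x)                             # pool5
    x = layers.Flatten()(x)
    x = layers.Dropout(0.5)(layers.Dense(4096, activation='softplus')(x))  # full6
    x = layers.Dropout(0.5)(layers.Dense(4096, activation='softplus')(x))  # full7
    out = layers.Dense(n_tags, activation='sigmoid')(x)                  # full8
    return keras.Model(x_in, out)

model = one_stream_model()
model.compile(optimizer=keras.optimizers.Adam(2e-4, beta_1=0.5),
              loss='binary_crossentropy')
```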
We used Adam [28] with parameters α = 0.0002, β1 = 0.5, β2 = 0.999, ε = 1e−8 and a mini-batch
size of 36 to train the models by minimizing the binary cross-entropy loss function. Initial model
parameters were drawn from a uniform distribution as described in [29]. Songs in each training
mini-batch were randomly cropped to six seconds (96000 samples). The epoch in which the validation
performance was the highest was taken as the final model (53, 12 and 12 for T, F and TF models,
respectively). The DNN models were implemented in Keras [30].
Once trained, we first tested the tag prediction performance of the models and identified the model
with the highest performance. To predict the tags of a 29-second-long song excerpt in the test split of
the MagnaTagATune dataset, we first predicted the tags of 24 six-second-long overlapping segments
separated by a second and averaged the predictions.
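A sketch of this test-time protocol (our reconstruction, not the authors' code; the 16 kHz sampling rate is implied by 6 s = 96000 samples, and `model` stands for any trained tag predictor over such windows):

```python
import numpy as np

# Slide a 6 s (96000-sample) window over a 29 s excerpt with 1 s hops,
# predict tags per window, and average the predictions.
def predict_excerpt(model, audio, sr=16000, win_s=6, hop_s=1):
    size, step = win_s * sr, hop_s * sr
    starts = range(0, len(audio) - size + 1, step)   # 24 windows for 29 s
    segments = np.stack([audio[s:s + size] for s in starts])[..., None]
    return model.predict(segments).mean(axis=0)      # 50 averaged tag scores
```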
We then used the model with the highest performance for nonlinearly transforming the stimuli to
eight layers of hierarchical representations for subsequent analyses. Note that the artificial neurons in
the convolutional layers locally filtered their inputs (1D convolution), nonlinearly transformed them
and returned temporal representations per stimulus. These representations were further processed by
averaging them over time. In contrast, the artificial neurons in the fully-connected layers globally
filtered their inputs (dot product), non-linearly transformed them and returned scalar representations
per stimulus. These representations were not further processed. These transformations resulted in n
matrices of size m × pi where n is the number of layers (8), m is the number of stimuli (25) and pi is
the number of artificial neurons in the ith layer (48 or 96, 128 or 256, 192 or 384, 192 or 384, 128 or
256, 4096, 4096 and 50 for i = 1, . . . , 8, respectively).
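The post-processing described here can be sketched as follows; this is our own illustration, and the layer names are hypothetical (they would need to match names given when the model was built):

```python
import numpy as np
from tensorflow import keras

# Turn DNN activations into one representation vector per stimulus:
# temporal (convolutional) outputs are averaged over time, while
# fully-connected outputs are used as-is.
def layer_representations(model, stimuli, layer_names):
    reps = []
    for name in layer_names:
        sub = keras.Model(model.input, model.get_layer(name).output)
        act = sub.predict(stimuli)                   # stimuli x (time x) units
        reps.append(act.mean(axis=1) if act.ndim == 3 else act)
    return reps                                      # list of (m x p_i) arrays
```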
2.4 Representational Similarity Analysis
We used Representational Similarity Analysis (RSA) [31] to investigate how well the representational
structures of DNN model layers match with that of the response patterns in STG. In RSA, models
and brain regions are characterized by n × n representational dissimilarity matrices (RDMs), whose
elements represent the dissimilarity between the neural or model representations of a pair of stimuli.
In turn, computing the overlap between the model and neural RDMs provides evidence about how
well a particular model explains the response patterns in a particular brain region. Specifically, we
performed a region of interest analysis as well as a searchlight analysis by first constructing the RDMs
of STG (target RDM) and the model layers (candidate RDM). In the ROI analysis, this resulted in
one target RDM per subject and eight candidate RDMs. For each subject, we correlated the upper
triangular parts of the target RDM with the candidate RDMs (Spearman correlation). We quantified
the similarity of STG representations with the model representations as the mean correlation. For the
searchlight analysis, this resulted in 27277 target RDMs (each derived from a spherical neighborhood
of 100 voxels) and 8 candidate RDMs. For each subject and target RDM, we correlated the upper
triangular parts of the target RDM with the candidate RDMs (Spearman correlation). Then, the layers
which resulted in the highest correlation were assigned to the voxels at the center of the corresponding
neighborhoods. Finally, the layer assignments were averaged over the subjects and the result was
taken as the final layer assignment of the voxels.
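The two core RSA operations described here can be condensed into a few lines; the sketch below uses random arrays as stand-ins for the brain and model features:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# RDM: pairwise dissimilarity (1 - Spearman's r) between the representations
# of all stimulus pairs; pdist returns exactly the upper triangular part.
def rdm(features):                                   # features: stimuli x units
    return pdist(features, lambda a, b: 1 - spearmanr(a, b).correlation)

def rdm_similarity(target_features, candidate_features):
    return spearmanr(rdm(target_features), rdm(candidate_features)).correlation

brain = np.random.randn(25, 100)                     # 25 stimuli x 100 voxels
layer = np.random.randn(25, 4096)                    # 25 stimuli x layer units
print(rdm_similarity(brain, layer))
```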
2.5 Control Models
To evaluate the importance of task optimization for modeling STG representations, we compared the
representational similarities of the entire STG region and the task-optimized DNN-TF model layers
with the representational similarities of the entire STG region and two sets of control models.
The first set of control models transformed the stimuli to the following 48-dimensional model
representations³ (a sketch of their computation follows the list):
• Mel-frequency spectrum (mfs) representing a mel-scaled short-term power spectrum inspired by
human auditory perception, where frequencies are organized by equidistant pitch locations. These
representations were computed by applying (i) a short-time Fourier transform and (ii) a mel-scaled
frequency-domain filterbank.
• Mel-frequency cepstral coefficients (mfccs) representing both broad-spectrum information (timbre)
and fine-scale spectral structure (pitch). These representations were computed by (i) mapping the
mfs to a decibel amplitude scale and (ii) multiplying them by the discrete cosine transform matrix.
• Low-quefrency mel-frequency spectrum (lq_mfs) representing timbre. These representations were
computed by (i) zeroing the high-quefrency mfccs, (ii) multiplying them by the inverse of the discrete
cosine transform matrix and (iii) mapping them back from the decibel amplitude scale.
• High-quefrency mel-frequency spectrum (hq_mfs) representing pitch. These representations were
computed by (i) zeroing the low-quefrency mfccs, (ii) multiplying them by the inverse of the discrete
cosine transform matrix and (iii) mapping them back from the decibel amplitude scale.
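A compact sketch of these four computations is given below; the mel-scaled power spectrogram input, the decibel conversion, and the quefrency cutoff `k` are our assumptions about details not fully specified here.

```python
import numpy as np
from scipy.fftpack import dct, idct

# mel_power: mel-scaled power spectrogram of one stimulus (48 bands x frames).
def control_models(mel_power, k=13):                 # k: quefrency cutoff
    mfs = mel_power.mean(axis=1)                     # mel-frequency spectrum
    log_mel = 10 * np.log10(mel_power + 1e-10)       # decibel amplitude scale
    mfcc = dct(log_mel, axis=0, norm='ortho')        # cepstral coefficients
    low, high = mfcc.copy(), mfcc.copy()
    low[k:], high[:k] = 0, 0                         # zero high/low quefrencies
    lq_mfs = 10 ** (idct(low, axis=0, norm='ortho') / 10)
    hq_mfs = 10 ** (idct(high, axis=0, norm='ortho') / 10)
    return mfs, mfcc.mean(axis=1), lq_mfs.mean(axis=1), hq_mfs.mean(axis=1)
```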
The second set of control models were 10 random DNN models with the same architecture as the
DNN-TF model, but with parameters drawn from a zero mean and unit variance multivariate Gaussian
distribution.
³ These are provided as part of the studyforrest dataset [23].
3 Results
In the first set of experiments, we analyzed the task-optimized DNN models. The tag prediction
performance of the models for the individual tags was defined as the area under the receiver operating
characteristic (ROC) curve (AUC).
We first compared the mean performance of the models over all tags (Figure 2). The performance of
all models was significantly above chance level (p ≪ 0.001, Student's t-test, Bonferroni correction).
The highest performance was achieved by the DNN-TF model (0.8939), followed by the DNN-F
model (0.8905) and the DNN-T model (0.8852). To the best of our knowledge, this is the highest tag
prediction performance of an end-to-end model evaluated on the same split of the same dataset [32].
The performance was further improved by averaging the predictions of the DNN-T and DNN-F
models (0.8982) as well as those of the DNN-T, DNN-F and DNN-TF models (0.9007). To the best
of our knowledge, this is the highest tag prediction performance of any model (ensemble) evaluated
on the same split of the same dataset [33, 32, 34]. For the remainder of the analyses, we considered
only the DNN-TF model since it achieved the highest single-model performance.
Figure 2: Tag prediction performance of the task-optimized DNN models. Bars show AUCs over
all tags for the corresponding task-optimized DNN models. Error bars show ± SE. All pairwise
differences are significant except for the pairs 1 and 2, and 2 and 3 (p < 0.05, paired-sample t-test,
Bonferroni correction).
We then compared the performance of the DNN-TF model for the individual tags (Figure 3). Visual
inspection did not reveal a prominent pattern in the performance distribution over tags. The performance was not significantly correlated with tag popularity (p > 0.05, Student's t-test). The only
exception was that the performance for the positive tags was significantly higher than that for the
negative tags (p ≪ 0.001, Student's t-test).
Figure 3: Tag prediction performance of the task-optimized DNN-TF model. Bars show AUCs
for the corresponding tags. Red band shows the mean ± SE for the task-optimized DNN-TF model
over all tags.
In the second set of experiments, we analyzed how closely the representational geometry of STG is
related to the representational geometries of the task-optimized DNN-TF model layers.
First, we constructed the candidate RDMs of the layers (Figure 4). Visual inspection revealed
similarity structure patterns that became increasingly prominent with increasing layer depth. The
most prominent pattern was the non-vocal and vocal subdivision.
Figure 4: RDMs of the task-optimized DNN-TF model layers. Matrix elements show the dissimilarity (1 - Spearman's r) between the model layer representations of the corresponding trials. Matrix
rows and columns are sorted according to the genres of the corresponding trials.
Second, we performed a region of interest analysis by comparing the reference RDM of the entire
STG region with the candidate RDMs (Figure 5). While none of the correlations between the
reference RDM and the candidate RDMs reached the noise ceiling (expected correlation between the
reference RDM and the RDM of the true model given the noise in the analyzed data [31]), they were
all significantly above chance level (p < 0.05, signed-rank test with subject RFX, FDR correction).
The highest correlation was found for Layer 1 (0.6811), whereas the lowest correlation was found for
Layer 8 (0.4429).
Figure 5: Representational similarities of the entire STG region and the task-optimized DNN-TF
model layers. Bars show the mean similarity (Spearman's r) of the target RDM and the corresponding candidate RDMs over all subjects. Error bars show ± SE. Red band shows the expected
representational similarity of the STG and the true model given the noise in the analyzed data (noise
ceiling). All pairwise differences are significant except for the pairs 1 and 5, 2 and 6, and 3 and 4
(p < 0.05, signed-rank test with subject RFX, FDR correction).
Third, we performed a searchlight analysis [35] by comparing the reference RDMs of multiple STG
voxel neighborhoods with the candidate RDMs (Figure 6). Each neighborhood center was assigned
a layer such that the corresponding target and candidate RDM were maximally correlated. This
analysis revealed a systematic change in the mean layer assignments over subjects along STG. They
increased from anterior STG to posterior STG such that most voxels in the region of the transverse
temporal gyrus were assigned to the shallower layers and most voxels in the region of the angular
gyrus were assigned to the deeper layers. The corresponding mean correlations between the target
and the candidate RDMs decreased from anterior to posterior STG.
In order to quantify the gradient in layer assignment, we correlated the mean layer assignment of the
STG voxels in each coronal slice with the slice position, which was taken to be the slice number. As a
result, it was found that layer and position are significantly correlated for the voxels along the
anterior-posterior STG direction (r = 0.7255, Pearson's r, p ≪ 0.001, Student's t-test). Furthermore, the
mean correlations between the target and the candidate RDMs for the majority (85.53%) of the STG
voxels were significant (p < 0.05, signed-rank test with subject RFX, FDR correction for the number
of voxels followed by Bonferroni correction for the number of layers). However, the correlations
of many voxels at the posterior end of STG were not highly significant in contrast to their central
counterparts and ceased to be significant as the (multiple comparisons corrected) critical value was
decreased from 0.05 to 0.01, which reduced the number of voxels surviving the critical value from
85.53% to 75.32%. Nevertheless, the gradient in layer assignment was maintained even when the
voxels that did not survive the new critical value were ignored (r = 0.7332, Pearson's r, p ≪ 0.001,
Student's t-test).
Figure 6: Representational similarities of the spherical STG voxel clusters and the task-optimized DNN-TF model layers. Only the STG voxels that survived the (multiple comparisons
corrected) critical value of 0.05 are shown. Those that did not survive the critical value of 0.01 are
indicated with transparent white masks and black outlines. (A) Mean representational similarities
over subjects. (B) Mean layer assignments over subjects.
These results show that increasingly posterior STG voxels can be modeled with increasingly deeper
DNN layers optimized for music tag prediction. This observation is in line with the visual neuroscience literature where it was shown that increasingly deeper layers of DNNs optimized for visual
object and action recognition can be used to model increasingly downstream ventral and dorsal
stream voxels [19, 20]. It also agrees with previous work showing a gradient in auditory cortex with
DNNs optimized for speech-to-word mapping [21]. It would be of particular interest to compare the
respective gradients and use the music and speech DNNs as each other's control model so as to
disentangle speech- and music-specific representations in auditory cortex.
In the last set of experiments, we analyzed the control models. We first constructed the RDMs of the
control models (Figure 7). Visual inspection revealed considerable differences between the RDMs of
the task-optimized DNN-TF model and those of the control models.
Figure 7: RDMs of the random DNN model layers (top row) and the baseline models (bottom
row). Matrix elements show the dissimilarity (1 - Spearman's r) between the model layer representations of the corresponding trials. Matrix rows and columns are sorted according to the genres of the
corresponding trials.
We then compared the similarities of the task-optimized candidate RDMs and the target RDM
versus the similarities of the control RDMs and the target RDM (Figure 8). The layers of the task-optimized DNN model significantly outperformed the corresponding layers of the random DNN model
(Δr = 0.21, p < 0.05, signed-rank test with subject RFX, FDR correction) and the four baseline
models (Δr = 0.42 for mfs, Δr = 0.21 for mfcc, Δr = 0.44 for lq_mfs and Δr = 0.34 for hq_mfs,
signed-rank test with subject RFX, FDR correction). Furthermore, we performed the searchlight
analysis with the random DNN model to determine whether the gradient in layer assignment is a
consequence of model architecture or model representation. We found that the random DNN model
failed to maintain the gradient in layer assignment (r = −0.2175, Pearson's r, p = 0.0771, Student's
t-test), suggesting that the gradient is in the representation that emerges from task optimization.
These results show the importance of task optimization for modeling STG representations. This
observation is also in line with the visual neuroscience literature, where similar analyses showed the
importance of task optimization for modeling ventral stream representations [19, 17].
Figure 8: Control analyses. (A) Representational similarities of the entire STG region and the
task-optimized DNN-TF model versus the representational similarities of the entire STG region and
the control models. Different colors show different control models: Random DNN model, mfs model,
mfcc model, lq_mfs model and hq_mfs model. Bars show mean similarity differences over subjects.
Error bars show ± SE. (B) Mean layer assignments over subjects for the random DNN model. Voxels,
masks and outlines are the same as those in Figure 6.
4 Conclusion
We showed that task-optimized DNNs that use time and/or frequency domain representations of
music achieved state-of-the-art performance in various evaluation scenarios for automatic music
tagging. Comparison of DNN and STG representations revealed a representational gradient in STG
with anterior STG being more sensitive to low-level stimulus features (shallow DNN layers) and
posterior STG being more sensitive to high-level stimulus features (deep DNN layers). These results,
in conjunction with previous results on the visual and auditory cortical representations, suggest
the existence of multiple representational gradients that process increasingly complex conceptual
information as we traverse sensory pathways of the human brain.
References
[1] B. L. Schwartz and J. H. Krantz, Sensation and Perception. SAGE Publications, 2015.
[2] J. M. Fuster, Cortex and Mind: Unifying Cognition. Oxford University Press, 2003.
[3] R. Santoro, M. Moerel, F. D. Martino, R. Goebel, K. Ugurbil, E. Yacoub, and E. Formisano, "Encoding of natural sounds at multiple spectral and temporal resolutions in the human auditory cortex," PLOS Computational Biology, vol. 10, p. e1003412, jan 2014.
[4] M. Moerel, F. D. Martino, K. Uğurbil, E. Yacoub, and E. Formisano, "Processing of frequency and location in human subcortical auditory structures," Scientific Reports, vol. 5, p. 17048, nov 2015.
[5] V. Alluri, P. Toiviainen, I. P. Jääskeläinen, E. Glerean, M. Sams, and E. Brattico, "Large-scale brain networks emerge from dynamic processing of musical timbre, key and rhythm," NeuroImage, vol. 59, pp. 3677–3689, feb 2012.
[6] V. Alluri, P. Toiviainen, T. E. Lund, M. Wallentin, P. Vuust, A. K. Nandi, T. Ristaniemi, and E. Brattico, "From vivaldi to beatles and back: Predicting lateralized brain responses to music," NeuroImage, vol. 83, pp. 627–636, dec 2013.
[7] P. Toiviainen, V. Alluri, E. Brattico, M. Wallentin, and P. Vuust, "Capturing the musical brain with lasso: Dynamic decoding of musical features from fMRI data," NeuroImage, vol. 88, pp. 170–180, mar 2014.
[8] R. D. Patterson, S. Uppenkamp, I. S. Johnsrude, and T. D. Griffiths, "The processing of temporal pitch and melody information in auditory cortex," Neuron, vol. 36, pp. 767–776, nov 2002.
[9] M. Casey, J. Thompson, O. Kang, R. Raizada, and T. Wheatley, "Population codes representing musical timbre for high-level fMRI categorization of music genres," in MLINI, 2011.
[10] N. Staeren, H. Renvall, F. D. Martino, R. Goebel, and E. Formisano, "Sound categories are represented as distributed patterns in the human auditory cortex," Current Biology, vol. 19, pp. 498–502, mar 2009.
[11] D. L. K. Yamins, H. Hong, C. F. Cadieu, E. A. Solomon, D. Seibert, and J. J. DiCarlo, "Performance-optimized hierarchical models predict neural responses in higher visual cortex," Proceedings of the National Academy of Sciences, vol. 111, pp. 8619–8624, may 2014.
[12] P. Agrawal, D. Stansbury, J. Malik, and J. L. Gallant, "Pixels to voxels: Modeling visual representation in the human brain," arXiv:1407.5104, 2014.
[13] S.-M. Khaligh-Razavi and N. Kriegeskorte, "Deep supervised, but not unsupervised, models may explain IT cortical representation," PLOS Computational Biology, vol. 10, p. e1003915, nov 2014.
[14] C. F. Cadieu, H. Hong, D. L. K. Yamins, N. Pinto, D. Ardila, E. A. Solomon, N. J. Majaj, and J. J. DiCarlo, "Deep neural networks rival the representation of primate IT cortex for core visual object recognition," PLOS Computational Biology, vol. 10, p. e1003963, dec 2014.
[15] T. Horikawa and Y. Kamitani, "Generic decoding of seen and imagined objects using hierarchical visual features," arXiv:1510.06479, 2015.
[16] R. M. Cichy, A. Khosla, D. Pantazis, A. Torralba, and A. Oliva, "Deep neural networks predict hierarchical spatio-temporal cortical dynamics of human visual object recognition," arXiv:1601.02970, 2016.
[17] D. Seibert, D. L. Yamins, D. Ardila, H. Hong, J. J. DiCarlo, and J. L. Gardner, "A performance-optimized model of neural responses across the ventral visual stream," bioRxiv, 2016.
[18] R. M. Cichy, A. Khosla, D. Pantazis, and A. Oliva, "Dynamics of scene representations in the human brain revealed by magnetoencephalography and deep neural networks," NeuroImage, apr 2016.
[19] U. Güçlü and M. A. J. van Gerven, "Deep neural networks reveal a gradient in the complexity of neural representations across the ventral stream," Journal of Neuroscience, vol. 35, pp. 10005–10014, jul 2015.
[20] U. Güçlü and M. A. J. van Gerven, "Increasingly complex representations of natural movies across the dorsal stream are shared between subjects," NeuroImage, dec 2015.
[21] A. Kell, D. Yamins, S. Norman-Haignere, and J. McDermott, "Speech-trained neural networks behave like human listeners and reveal a hierarchy in auditory cortex," in COSYNE, 2016.
[22] E. Law, K. West, M. Mandel, M. Bay, and J. S. Downie, "Evaluation of algorithms using games: The case of music tagging," in ISMIR, 2009.
[23] M. Hanke, R. Dinga, C. Häusler, J. S. Guntupalli, M. Casey, F. R. Kaule, and J. Stadler, "High-resolution 7-tesla fMRI data on the perception of musical genres – an extension to the studyforrest dataset," F1000Research, jun 2015.
[24] K. N. Kay, A. Rokem, J. Winawer, R. F. Dougherty, and B. A. Wandell, "GLMdenoise: a fast, automated technique for denoising task-based fMRI data," Frontiers in Neuroscience, vol. 7, 2013.
[25] J. M. McFarland, Y. Cui, and D. A. Butts, "Inferring nonlinear neuronal computation based on physiologically plausible inputs," PLOS Computational Biology, vol. 9, p. e1003143, jul 2013.
[26] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in NIPS, 2012.
[27] S. Ioffe and C. Szegedy, "Batch normalization: Accelerating deep network training by reducing internal covariate shift," arXiv:1502.03167, 2015.
[28] D. Kingma and J. Ba, "Adam: A method for stochastic optimization," arXiv:1412.6980, 2014.
[29] X. Glorot and Y. Bengio, "Understanding the difficulty of training deep feedforward neural networks," in AISTATS, 2010.
[30] F. Chollet, "Keras." https://github.com/fchollet/keras, 2015.
[31] N. Kriegeskorte, "Representational similarity analysis – connecting the branches of systems neuroscience," Frontiers in Systems Neuroscience, 2008.
[32] S. Dieleman and B. Schrauwen, "End-to-end learning for music audio," in ICASSP, 2014.
[33] S. Dieleman and B. Schrauwen, "Multiscale approaches to music audio feature learning," in ISMIR, 2013.
[34] A. van den Oord, S. Dieleman, and B. Schrauwen, "Transfer learning by supervised pre-training for audio-based music classification," in ISMIR, 2014.
[35] N. Kriegeskorte, R. Goebel, and P. Bandettini, "Information-based functional brain mapping," Proceedings of the National Academy of Sciences, vol. 103, pp. 3863–3868, feb 2006.
5,773 | 6,223 | Identification and Overidentification of
Linear Structural Equation Models
Bryant Chen
University of California, Los Angeles
Computer Science Department
Los Angeles, CA, 90095-1596, USA
Abstract
In this paper, we address the problems of identifying linear structural equation
models and discovering the constraints they imply. We first extend the half-trek
criterion to cover a broader class of models and apply our extension to finding
testable constraints implied by the model. We then show that any semi-Markovian
linear model can be recursively decomposed into simpler sub-models, resulting
in improved identification and constraint discovery power. Finally, we show that,
unlike the existing methods developed for linear models, the resulting method
subsumes the identification and constraint discovery algorithms for non-parametric
models.
1 Introduction
Many researchers, particularly in economics, psychology, and the social sciences, use linear structural
equation models (SEMs) to describe the causal and statistical relationships between a set of variables,
predict the effects of interventions and policies, and to estimate parameters of interest. When modeling
using linear SEMs, researchers typically specify the causal structure (i.e. exclusion restrictions and
independence restrictions between error terms) from domain knowledge, leaving the structural
coefficients (representing the strength of the causal relationships) as free parameters to be estimated
from data. If these coefficients are known, then total effects, direct effects, and counterfactuals
can be computed from them directly (Balke and Pearl, 1994). However, in some cases, the causal
assumptions embedded in the model are not enough to uniquely determine one or more coefficients
from the probability distribution, and therefore, cannot be estimated using data. In such cases, we say
that the coefficient is not identified or not identifiable¹.
In other cases, a coefficient may be overidentified in addition to being identified, meaning that there
are at least two minimal sets of logically independent assumptions in the model that are sufficient
for identifying a coefficient, and the identified expressions for the coefficient are distinct functions
of the covariance matrix (Pearl, 2004). As a result, the model imposes a testable constraint on the
probability distribution that the two (or more) identified expressions for the coefficient are equal.
As compact and transparent representations of the model's structure, causal graphs provide a convenient tool to aid in the identification of coefficients. First utilized as a causal inference tool by
Wright (1921), graphs have more recently been applied to identify causal effects in non-parametric
causal models (Pearl, 2009) and enabled the development of causal effect identification algorithms
that are complete for non-parametric models (Huang and Valtorta, 2006; Shpitser and Pearl, 2006).
These algorithms can be applied to the identification of coefficients in linear SEMs by identifying
non-parametric direct effects, which are closely related to structural coefficients (Tian, 2005; Chen
and Pearl, 2014). Algorithms designed specifically for the identification of linear SEMs were de-
¹ We will also use the term "identified" with respect to individual variables and the model as a whole.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
veloped by Brito and Pearl (2002), Brito (2004), Tian (2005, 2007, 2009), Foygel et al. (2012), and
Chen et al. (2014).
Graphs have also proven to be valuable tools in the discovery of testable implications. It is well
known that conditional independence relationships can be easily read from the causal graph using d-separation (Pearl, 2009), and Kang and Tian (2009) gave a procedure for linear SEMs that enumerates
a set of conditional independences that imply all others. In non-parametric models without latent
variables or correlated error terms, these conditional independence constraints represent all of the
testable implications of the model (Pearl, 2009). In models with latent variables and/or correlated
error terms, there may be additional constraints implied by the model. These non-independence
constraints, often called Verma constraints, were first noted by Verma and Pearl (1990), and Tian
and Pearl (2002b) and Shpitser and Pearl (2008) developed graphical algorithms for systematically
discovering such constraints in non-parametric models. In the case of linear models, Chen et al. (2014)
applied their aforementioned identification method to the discovery of overidentifying constraints,
which in some cases are equivalent to the non-parametric constraints enumerated in Tian and Pearl
(2002b) and Shpitser and Pearl (2008).
Surprisingly, naively applying algorithms designed for non-parametric models to linear models
enables the identification of coefficients and constraints that the aforementioned methods developed
for linear models are unable to, despite utilizing the additional assumption of linearity. In this paper,
we first extend the half-trek identification method of Foygel et al. (2012) and apply it to the discovery
of half-trek constraints, which generalize the overidentifying constraints given in Chen et al. (2014).
Our extensions can be applied to Markovian, semi-Markovian, and non-Markovian models. We then
demonstrate how recursive c-component decomposition, which was first utilized in identification
algorithms developed for non-parametric models (Tian, 2002; Huang and Valtorta, 2006; Shpitser
and Pearl, 2006), can be incorporated into our linear identification and constraint discovery methods
for Markovian and semi-Markovian models. We show that doing so allows the identification of
additional models and constraints. Further, we will demonstrate that, unlike existing algorithms, our
method subsumes the aforementioned identification and constraint discovery methods developed for
non-parametric models when applied to linear SEMs.
2 Preliminaries
A linear structural equation model consists of a set of equations of the form X = Λᵀ X + ε, where X = [x_1, ..., x_n]ᵀ is a vector containing the model variables, Λ is a matrix containing the coefficients of the model, which convey the strength of the causal relationships, and ε = [ε_1, ..., ε_n]ᵀ is a vector of error terms, which represents omitted or latent variables. The matrix Λ contains zeroes on the diagonal, and λ_{ij} = 0 whenever x_i is not a cause of x_j. The error terms are normally distributed random variables and induce the probability distribution over the model variables. The covariance matrix of X will be denoted by Σ and the covariance matrix over the error terms, ε, by Ω.
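To make the setup concrete, the following sketch (ours, not part of the original text) simulates one instantiation of a linear SEM and computes the covariance matrix it implies; it assumes the convention X = Λᵀ X + ε used above, under which Σ = (I − Λᵀ)⁻¹ Ω (I − Λ)⁻¹. The specific Λ and Ω values are illustrative only.

```python
import numpy as np

# Hypothetical 3-variable chain x1 -> x2 -> x3 with correlated errors on (x1, x3).
# lam[i, j] is the coefficient on the edge x_i -> x_j; the diagonal is zero.
lam = np.array([[0.0, 0.8, 0.0],
                [0.0, 0.0, 1.2],
                [0.0, 0.0, 0.0]])
omega = np.array([[1.0, 0.0, 0.3],   # error covariance; a non-zero off-diagonal
                  [0.0, 1.0, 0.0],   # entry corresponds to a bidirected edge
                  [0.3, 0.0, 1.0]])

# X = Lam^T X + eps  =>  X = (I - Lam^T)^{-1} eps, so the implied covariance is
# Sigma = (I - Lam^T)^{-1} Omega (I - Lam)^{-1}.
I = np.eye(3)
sigma = np.linalg.inv(I - lam.T) @ omega @ np.linalg.inv(I - lam)
print(np.round(sigma, 3))
```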
An instantiation of a model M is an assignment of values to the model parameters (i.e., Λ and the non-zero elements of Ω). For a given instantiation m_i, let Σ(m_i) denote the covariance matrix implied by the model and λ_k(m_i) be the value of coefficient λ_k.
Definition 1. A coefficient, λ_k, is identified if for any two instantiations of the model, m_i and m_j, we have λ_k(m_i) = λ_k(m_j) whenever Σ(m_i) = Σ(m_j).
In other words, λ_k is identified if it can be uniquely determined from the covariance matrix, Σ. Now, we define when a structural coefficient, λ_k, is overidentified.
Definition 2. (Pearl, 2004) A coefficient, λ_k, is overidentified if there are two or more distinct sets of logically independent assumptions in M such that
(i) each set is sufficient for deriving λ_k as a function of Σ, λ_k = f(Σ),
(ii) each set induces a distinct function λ_k = f(Σ), and
(iii) each assumption set is minimal, that is, no proper subset of those assumptions is sufficient for the derivation of λ_k.
The causal graph or path diagram of an SEM is a graph, G = (V, D, B), where V are vertices or nodes, D directed edges, and B bidirected edges. The vertices represent model variables. Directed edges represent the direction of causality, and for each coefficient λ_{ij} ≠ 0, an edge is drawn from x_i to x_j. Each directed edge, therefore, is associated with a coefficient in the SEM, which we will often refer to as its structural coefficient. The error terms, ε_i, are not represented in the graph. However, a bidirected edge between two variables indicates that their corresponding error terms may be statistically dependent, while the lack of a bidirected edge indicates that the error terms are independent. When the causal graph is acyclic without bidirected edges, we say that the model is Markovian. Graphs with bidirected edges are non-Markovian, while acyclic graphs with bidirected edges are additionally called semi-Markovian.
We will use standard graph terminology, with Pa(y) denoting the parents of y, Anc(y) denoting the ancestors of y, De(y) denoting the descendants of y, and Sib(y) denoting the siblings of y, the variables that are connected to y via a bidirected edge. He(E) denotes the heads of a set of directed edges, E, while Ta(E) denotes the tails. Additionally, for a node v, the set of edges for which He(E) = v is denoted Inc(v). Lastly, we will utilize d-separation (Pearl, 2009).
Finally, we establish a couple of preliminary definitions concerning half-treks. These definitions and illustrative examples can also be found in Foygel et al. (2012) and Chen et al. (2014).
Definition 3. (Foygel et al., 2012) A half-trek, π, from x to y is a path from x to y that either begins with a bidirected arc and then continues with directed edges towards y, or is simply a directed path from x to y.
We will denote by htr(v) the set of nodes that are reachable by half-trek from v.
Definition 4. (Foygel et al., 2012) For any half-trek, π, let Right(π) be the set of vertices in π that have an outgoing directed edge in π (as opposed to a bidirected edge), union the last node in the trek. In other words, if the trek is a directed path, then every node in the path is a member of Right(π). If the trek begins with a bidirected edge, then every node other than the first node is a member of Right(π).
Definition 5. (Foygel et al., 2012) A system of half-treks, π_1, ..., π_n, has no sided intersection if for all π_i, π_j ∈ {π_1, ..., π_n} such that π_i ≠ π_j, Right(π_i) ∩ Right(π_j) = ∅.
Definition 6. (Chen et al., 2014) For an arbitrary variable, v, let Pa_1, Pa_2, ..., Pa_k be the unique partition of Pa(v) such that any two parents are placed in the same subset, Pa_i, whenever they are connected by an unblocked path (given the empty set). A connected edge set with head v is a set of directed edges from Pa_i to v for some i ∈ {1, 2, ..., k}.
3 General Half-Trek Criterion
The half-trek criterion is a graphical condition that can be used to determine the identifiability of
recursive and non-recursive linear models (Foygel et al., 2012). Foygel et al. (2012) use the half-trek
criterion to identify the model variables one at a time, where each identified variable may be able
to aid in the identification of other variables. If any variable is not identifiable using the half-trek
criterion, then their algorithm returns that the model is not HTC-identifiable. Otherwise the algorithm
returns that the model is identifiable. Their algorithm subsumes the earlier methods of Brito and Pearl
(2002) and Brito (2004). In this section, we extend the half-trek criterion to allow the identification
of arbitrary subsets of edges belonging to a variable. As a result, our algorithm can be utilized to
identify as many coefficients as possible, even when the model is not identified. Additionally, this
extension improves our ability to identify entire models, as we will show.
Definition 7. (General Half-Trek Criterion) Let E be a set of directed edges sharing a single head y. A set of variables Z satisfies the general half-trek criterion with respect to E, if
(i) |Z| = |E|,
(ii) Z ∩ ({y} ∪ Sib(y)) = ∅,
(iii) there is a system of half-treks with no sided intersection from Z to Ta(E), and
(iv) (Pa(y) \ Ta(E)) ∩ htr(Z) = ∅.
A set of directed edges, E, sharing a head y is identifiable if there exists a set, Z_E, that satisfies the general half-trek criterion (g-HTC) with respect to E, and Z_E consists only of "allowed" nodes. Intuitively, a node z is allowed if E_zy is identified or empty, where E_zy ⊆ Inc(z) is the set of edges belonging to z that lie on half-treks from y to z or lie on unblocked paths (given the empty set) between z and Pa(y) \ Ta(E).2 The following definition formalizes this notion.
Figure 1: The above model is identified using the g-HTC but not the HTC.
Definition 8. A node, z, is g-HT allowed (or simply allowed) for directed edges E with head y if E_zy = ∅ or there exist sequences of sets of nodes, (Z_1, ..., Z_k), and sets of edges, (E_1, ..., E_k), with E_zy ⊆ E_1 ∪ · · · ∪ E_k such that
(i) Z_i satisfies the g-HTC with respect to E_i for all i ∈ {1, ..., k},
(ii) E_{Z_1 y_1} = ∅, where y_i = He(E_i) for all i ∈ {1, ..., k}, and
(iii) E_{Z_i y_i} ⊆ (E_1 ∪ · · · ∪ E_{i−1}) for all i ∈ {1, ..., k}.
When a set of allowed nodes, Z_E, satisfies the g-HTC for a set of edges E, then we will say that Z_E is a g-HT admissible set for E.
Theorem 1. If a g-HT admissible set for directed edges E_y with head y exists, then E_y is g-HT identifiable. Further, let Z_{E_y} = {z_1, ..., z_k} be a g-HT admissible set for E_y, Ta(E_y) = {p_1, ..., p_k}, and Σ be the covariance matrix of the model variables. Define A as

    A_{ij} = [(I − Λ)ᵀ Σ]_{z_i p_j},  if E_{z_i y} ≠ ∅;    A_{ij} = Σ_{z_i p_j},  if E_{z_i y} = ∅,    (1)

and b as

    b_i = [(I − Λ)ᵀ Σ]_{z_i y},  if E_{z_i y} ≠ ∅;    b_i = Σ_{z_i y},  if E_{z_i y} = ∅.    (2)

Then A is an invertible matrix and A · Λ_{Ta(E_y), y} = b.
Proof. See Appendix for proofs of all theorems and lemmas.
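As a computational sketch of how Theorem 1 is applied: once a g-HT admissible set has been found and the rows of A and b have been assembled from Σ (and from the already-identified coefficients that enter (I − Λ)ᵀΣ), the coefficients on E_y are recovered by a single linear solve. The helpers below are our own illustration and assume A and b are built exactly as in equations (1) and (2).

```python
import numpy as np

def identify_edges(A, b):
    """Solve A @ lam = b for Lambda_{Ta(E_y), y} (Theorem 1); A is square and
    invertible whenever Z_{E_y} is a g-HT admissible set."""
    return np.linalg.solve(A, b)

def allowed_node_row(sigma, lam_col_z, z, targets):
    """One row of A (or entry of b) for an allowed node z with E_zy nonempty:
    [(I - Lam)^T Sigma]_{z, t} = ((e_z - Lam[:, z])^T Sigma)_t for t in targets.
    lam_col_z is the already-identified column Lambda[:, z]."""
    e_z = np.zeros(sigma.shape[0])
    e_z[z] = 1.0
    return ((e_z - lam_col_z) @ sigma)[targets]
```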
The g-HTC improves upon the HTC because subsets of a variable's coefficients may be identifiable even when the variable is not. By identifying subsets of a variable's coefficients, we not only allow the identification of as many coefficients as possible in unidentified models, but we are also able to identify additional models as a whole. For example, Figure 1 is not identifiable using the HTC. In order to identify Y, Z2 needs to be identified first, as it is the only variable with a half-trek to X2 without being a sibling of Y. However, to identify Z2, either Y or W1 needs to be identified. Finally, to identify W1, Y needs to be identified. This cycle implies that the model is not HTC-identifiable. It is, however, g-HTC identifiable, since the g-HTC allows d to be identified independently of f, using {Z1} as a g-HT admissible set, which in turn allows {Y} to be a g-HT admissible set for W1's coefficient, a.
Finding a g-HT admissible set for directed edges, E, with head, y, from a set of allowed nodes, A_E, can be accomplished by utilizing the max-flow algorithm described in Chen et al. (2014),3 which we call MaxFlow(G, E, A_E). This algorithm returns a maximal set of allowed nodes that satisfies (ii)-(iv) of the g-HTC.
In some cases, there may be no g-HT admissible set for E but there may be one for E′ ⊂ E. In other cases, there may be no g-HT admissible set of variables for a set of edges E but there may be a g-HT admissible set of variables for E′ with E ⊂ E′.
2 We will continue to use the E_Zy notation and allow Z to be a set of nodes.
3 Brito (2004) utilized a similar max-flow construction in his identification algorithm.
Figure 2: (a) The graph is not identified using the g-HTC and cannot be decomposed. (b) After removing V6 we are able to decompose the graph. (c) Graph for c-component {V2, V3, V5}. (d) Graph for c-component {V1, V4}.
As a result, if a g-HT admissible set does not exist for E_y, where E_y = Inc(y) for some node y, we may have to check whether such a set exists for all possible subsets of E_y in order to identify as many coefficients in E_y as possible. This process can be somewhat simplified by noting that if E is a connected edge set with no g-HT admissible set, then there is no superset E′ with a g-HT admissible set.
An algorithm that utilizes the g-HTC and Theorem 1 to identify as many coefficients as possible in recursive or non-recursive linear SEMs is given in the Appendix. Since we may need to check the identifiability of all subsets of a node's edges, the algorithm's complexity is polynomial time if the degree of each node is bounded.
4 Generalizing Overidentifying Constraints
Chen et al. (2014) discovered overidentifying constraints by finding two HT-admissible sets for a given connected edge set. When two such sets exist, we obtain two distinct expressions for the identified coefficients, and equating the two expressions gives the overidentifying constraint. However, we may be able to obtain constraints even when |Z_E| < |E| and E is not identified. The algorithm, MaxFlow, returns a maximal set, Z_E, for which the equations, A · Λ_{Ta(E),y} = b, are linearly independent, regardless of whether |Z_E| = |E| and E is identified or not. Therefore, if we are able to find an allowed node w that satisfies the conditions below, then the equation a_w · Λ_{Ta(E),y} = b_w will be a linear combination of the equations, A · Λ_{Ta(E),y} = b.
Theorem 2. Let Z_E be a set of maximal size that satisfies conditions (ii)-(iv) of the g-HTC for a set of edges, E, with head y. If there exists a node w such that there exists a half-trek from w to Ta(E), w ∉ ({y} ∪ Sib(y)), and w is g-HT allowed for E, then we obtain the equality constraint, a_w A_right^{-1} b = b_w, where A_right^{-1} is the right inverse of A.
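Numerically, the HT-constraint of Theorem 2 can be tested by checking whether the extra equation (a_w, b_w) is consistent with the system (A, b). A small sketch of ours: since A has full row rank by the maximality of Z_E, the Moore-Penrose pseudo-inverse acts as a right inverse here.

```python
import numpy as np

def ht_constraint_residual(A, b, a_w, b_w):
    """Residual a_w @ A_right^{-1} @ b - b_w of the HT-constraint in Theorem 2;
    a residual significantly different from zero indicates a model violation."""
    return float(a_w @ (np.linalg.pinv(A) @ b) - b_w)
```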
We will call these generalized overidentifying constraints, half-trek constraints or HT-constraints. An
algorithm that identifies coefficients and finds HT-constraints for a recursive or non-recursive linear
SEM is given in the Appendix.
5 Decomposition
Tian showed that the identification problem could be simplified in semi-Markovian linear structural
equation models by decomposing the model into sub-models according to their c-components (Tian,
2005). Each coefficient is identifiable if and only if it is identifiable in the sub-model to which it
belongs (Tian, 2005). In this section, we show that the c-component decomposition can be applied
recursively to the model after marginalizing certain variables. This idea was first used to identify
interventional distributions in non-parametric models by Tian (2002) and Tian and Pearl (2002a), and adapting this technique for linear models will allow us to identify models that the g-HTC, even coupled with (non-recursive) c-component decomposition, is unable to identify. Further, it ensures the identification of all coefficients identifiable using methods developed for non-parametric models, a guarantee that none of the existing methods developed for linear models satisfies.
The graph in Figure 2a consists of a single c-component, and we are unable to decompose it. As a result, we are able to identify a but no other coefficients using the g-HTC. Moreover, f = ∂E[v5 | do(v6, v4, v3, v2, v1)] / ∂v4 is identified using identification methods developed for non-parametric models (e.g., do-calculus) but not via the g-HTC or other methods developed for linear models.
However, if we remove v6 from the analysis, then the resulting model can be decomposed. Let M be the model depicted in Figure 2a, P(v) be the distribution induced by M, and M′ be a model that is identical to M except that the equation for v6 is removed. M′ induces the distribution ∫_{v6} P(V) dv6, and its associated graph G′ yields two c-components, as shown in Figure 2b.
Now, decomposing G′ according to these c-components yields the sub-models depicted by Figures 2c and 2d. Both of these sub-models are identifiable using the half-trek criterion. Thus, all coefficients other than h have been shown to be identifiable. Returning to the graph prior to removal, depicted in Figure 2a, we are now able to identify h, because both v4 and v5 are now allowed nodes for h, and the model is identified.4
As a result, we can improve our identification and constraint-discovery algorithm by recursively
decomposing, using the g-HTC and Theorem 2, and removing descendant sets.5 Note, however, that we must consider every descendant set for removal. It is possible that removing D_1 will allow identification of a coefficient but removing a superset D_2 with D_1 ⊂ D_2 will not. Additionally, it is
possible that removing D2 will allow identification but removing a subset D1 will not.
After recursively decomposing the graph, if some of the removed variables were unidentified, we
may be able to identify them by returning to the original graph prior to removal since we may have a
larger set of allowed nodes. For example, we were able to identify h in Figure 2a by "un-removing"
v6 after the other coefficients were identified. In some cases, however, we may need to again
recursively decompose and remove descendant sets. As a result, in order to fully exploit the powers
of decomposition and the g-HTC, we must repeat the recursive decomposition process on the original
model until all marginalized nodes are identified or no new coefficients are identified in an iteration.
Clearly, recursive decomposition also aids in the discovery of HT-constraints in the same way that it
aids in the identification of coefficients using the g-HTC. However, note that recursive decomposition
may also introduce additional d-separation constraints. Prior to decomposition, if a node Z is d-separated from a node V, then we trivially obtain the constraint that Σ_{ZV} = 0. However, in some
cases, Z may become d-separated from V after decomposition. In this case, the independence
constraint on the covariance matrix of the decomposed c-component corresponds to a non-conditional
independence constraint in the original joint distribution P (V ). It is for this reason that we output
independence constraints in Algorithm 2 (see Appendix).
For example, consider the graph depicted in Figure 3a. Theorem 2 does not yield any constraints for
the edges of V7 . However, after decomposing the graph we obtain the c-component for {V2 , V5 , V7 },
shown in Figure 3b. In this graph, V1 is d-separated from V7 yielding a non-independence constraint
in the original model.
We can systematically identify coefficients and HT-constraints using recursive c-component decomposition by repeating the following steps for the model's graph G until the model has been identified
or no new coefficients are identified in an iteration:
(i) Decompose the graph into c-components, {Si }
(ii) For each c-component, utilize the g-HTC and Theorems 1 and 2 to identify coefficients and find
HT-constraints
(iii) For each descendant set, marginalize the descendant set and repeat steps (i)-(iii) until all
variables have been marginalized
4 While v4 and v5 are technically not allowed according to Definition 8, they can be used in g-HT admissible sets to identify h using Theorem 1, since their coefficients have been identified.
5 Only removing descendant sets has the ability to break up c-components. For example, removing {v2} from Figure 2a does not break the c-component, because removing v2 would relegate its influence to the error term of its child, v3. As a result, the graph of the resulting model would include a bidirected arc between v3 and v6, and we would still have a single c-component.
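The c-components used throughout this section are simply the connected components of the subgraph formed by the bidirected edges. A minimal sketch of ours (the graph encoding is our assumption, not the paper's):

```python
import networkx as nx

def c_components(nodes, bidirected_edges):
    """Partition `nodes` into c-components: connected components of the
    bidirected part of the causal graph; isolated nodes form singletons."""
    g = nx.Graph()
    g.add_nodes_from(nodes)
    g.add_edges_from(bidirected_edges)
    return [set(c) for c in nx.connected_components(g)]
```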
Figure 3: (a) V1 cannot be d-separated from V7. (b) V1 is d-separated from V7 in the graph of the c-component {V2, V5, V7}.
If a coefficient λ can be identified using the above method (see also Algorithm 3 in the Appendix, which utilizes recursive decomposition to identify coefficients and output HT-constraints), then we will say that λ is g-HTC identifiable.
We now show that any direct effect identifiable using non-parametric methods is also g-HTC identifiable.
Theorem 3. Let M be a linear SEM with variables V. Let M′ be a non-parametric SEM with identical structure to M. If the direct effect of x on y for x, y ∈ V is identified in M′, then the coefficient λ_xy in M is g-HTC identifiable and can be identified using Algorithm 3 (see Appendix).
6 Non-Parametric Verma Constraints
Tian and Pearl (2002b) and Shpitser and Pearl (2008) provided algorithms for discovering Verma
constraints in recursive, non-parametric models. In this section, we will show that the constraints
obtained by the above method and Algorithm 3 (see Appendix) subsume the constraints discovered
by both methods when applied to linear models. First, we will show that the constraints identified in
(Tian and Pearl, 2002b), which we call Q-constraints, are subsumed by HT-constraints. Second, we
will show that the constraints given by Shpitser and Pearl (2008), called dormant independences, are,
in fact, equivalent to the constraints given by Tian and Pearl (2002b) for linear models. As a result,
both dormant independences and Q-constraints are subsumed by HT-constraints.
6.1 Q-Constraints
We refer to the constraints enumerated in (Tian and Pearl, 2002b) as Q-constraints, since they are discovered by identifying Q-factors, which are defined below.
Definition 9. For any subset, S ⊆ V, the Q-factor, Q_S, is given by

    Q_S = ∫_{ε_S} ∏_{i | v_i ∈ S} P(v_i | pa_i, ε_i) · P(ε_S) dε_S,    (3)

where ε_S contains the error terms of the variables in S.
A Q-factor, Q_S, is identifiable whenever S is a c-component (Tian and Pearl, 2002a).
Lemma 1. (Tian and Pearl, 2002a) Let {v_1, ..., v_n} be sorted topologically, S be a c-component, V^(i) = {v_1, ..., v_i}, and V^(0) = ∅. Then Q_S can be computed as Q_S = ∏_{i | v_i ∈ S} P(v_i | V^(i−1)).
For example, consider again Figure 2b. We have that Q_1 = P(v1) P(v4 | v3, v2, v1) and Q_2 = P(v2 | v1) P(v3 | v2, v1) P(v5 | v4, v3, v2, v1).
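For a discrete model, Lemma 1 reduces to a few lines of array arithmetic. The sketch below is our illustration for binary variables stored in topological order; it assumes a strictly positive joint pmf so every conditional is well defined.

```python
import numpy as np

def q_factor(joint, S):
    """Q_S = prod_{i : v_i in S} P(v_i | v_1, ..., v_{i-1}) per Lemma 1.
    joint : array of shape (2,)*n, the joint pmf P(v_1, ..., v_n), with
            variables sorted topologically.
    S     : set of 0-based variable indices (a c-component).
    Returns an array over (v_1, ..., v_m) with m = max(S) + 1."""
    n = joint.ndim
    m = max(S) + 1
    # prefix[i] = P(v_1, ..., v_i), obtained by marginalizing the rest.
    prefix = [joint.sum(axis=tuple(range(i, n))) for i in range(n + 1)]
    q = np.ones(prefix[m].shape)
    for i in sorted(S):
        cond = prefix[i + 1] / prefix[i][..., None]   # P(v_{i+1} | v_1..v_i)
        q = q * cond.reshape(cond.shape + (1,) * (m - i - 1))
    return q
```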
A Q-factor can also be identified by marginalizing out descendant sets (Tian and Pearl, 2002a). Suppose that Q_S is identified and D is a descendant set in G_S; then

    Q_{S\D} = Σ_D Q_S.    (4)

If the marginalization over D yields additional c-components in the marginalized graph, then we can again compute each of them from Q_{S\D} (Tian and Pearl, 2002b).
Figure 4: The above graph induces the Verma constraint that Q[v4] is not a function of v1, and equivalently, v4 ⫫ v1 | do(v3).
Tian's method recursively computes the Q-factors associated with c-components, marginalizes descendant sets in the graph for the computed Q-factor, and again computes Q-factors associated with c-components in the marginalized graph. The Q-constraint is obtained in the following way. The definition of a Q-factor, Q_S, given by Equation 3, is a function of Pa(S) only. However, the equivalent expression given by Lemma 1 and Equation 4 may be a function of additional variables. For example, in Figure 4, {v2, v4} is a c-component, so we can identify Q_{v2 v4} = P(v4 | v3, v2, v1) P(v2 | v1). The decomposition also makes v2 a leaf node in G_{v2 v4}. As a result, we can identify Q_{v4} = ∫_{v2} P(v4 | v3, v2, v1) P(v2 | v1) dv2. Since v1 is not a parent of v4 in G_{v4}, we have the constraint that Q_{v4} = ∫_{v2} P(v4 | v3, v2, v1) P(v2 | v1) dv2 is not a function of v1.
Theorem 4. Any Q-constraint, Q_S ⫫ Z, in a linear SEM has an equivalent set of HT-constraints that can be discovered using Algorithm 3 (see Appendix).
6.2 Dormant Independences
Dormant independences have a natural interpretation as independence and conditional independence constraints within identifiable interventional distributions (Shpitser and Pearl, 2008). For example, in Figure 4, the distribution after intervention on v3 can be represented graphically by removing the edge from v2 to v3, since v3 is no longer a function of v2 but is instead a constant. In the resulting graph, v4 is d-separated from v1, implying that v4 is independent of v1 in the distribution P(v4, v2, v1 | do(v3)). In other words, P(v4 | do(v3), v1) = P(v4 | do(v3)). Now, it is not hard to show that P(v4 | v1, do(v3)) is identifiable and equal to Σ_{v2} P(v4 | v3, v2, v1) P(v2 | v1), and we obtain the constraint that Σ_{v2} P(v4 | v3, v2, v1) P(v2 | v1) is not a function of v1, which is exactly the Q-constraint we obtained above.
It turns out that dormant independences among singletons and Q-constraints are equivalent, as stated by the following lemma.
Lemma 2. Any dormant independence, x ⫫ y | w, do(Z), with x and y singletons, has an equivalent Q-constraint and vice versa.
Since pairwise independence implies independence in normal distributions, Lemma 2 and Theorem 4 imply the following theorem.
Theorem 5. Any dormant independence among sets, x ⫫ y | W, do(Z), in a linear SEM, has an equivalent set of HT-constraints that can be discovered by incorporating recursive c-component decomposition with Algorithm 3 (see Appendix).
7 Conclusion
In this paper, we extend the half-trek criterion (Foygel et al., 2012) and generalize the notion of
overidentification to discover constraints using the generalized half-trek criterion, even when the
coefficients are not identified. We then incorporate recursive c-component decomposition and show
that the resulting identification method is able to identify more models and constraints than the existing linear and non-parametric algorithms.
Finally, we note that while we were preparing this manuscript for submission, Drton and Weihs
(2016) independently introduced a similar idea to the recursive decomposition discussed in this paper,
which they called ancestor decomposition. While ancestor decomposition is more efficient, recursive
decomposition is more general in that it enables the identification of a larger set of coefficients.
8 Acknowledgments
I would like to thank Jin Tian and Judea Pearl for helpful comments and discussions. This research was supported in part by grants from NSF #IIS-1302448 and #IIS-1527490 and ONR #N00014-13-1-0153.
References
BALKE, A. and PEARL, J. (1994). Probabilistic evaluation of counterfactual queries. In Proceedings of the Twelfth National Conference on Artificial Intelligence, vol. I. MIT Press, Menlo Park, CA, 230–237.
BRITO, C. (2004). Graphical methods for identification in structural equation models. Ph.D. thesis, Computer Science Department, University of California, Los Angeles, CA. URL <http://ftp.cs.ucla.edu/pub/stat_ser/r314.pdf>
BRITO, C. and PEARL, J. (2002). Generalized instrumental variables. In Uncertainty in Artificial Intelligence, Proceedings of the Eighteenth Conference (A. Darwiche and N. Friedman, eds.). Morgan Kaufmann, San Francisco, 85–93.
CHEN, B. and PEARL, J. (2014). Graphical tools for linear structural equation modeling. Tech. Rep. R-432, <http://ftp.cs.ucla.edu/pub/stat_ser/r432.pdf>, Department of Computer Science, University of California, Los Angeles, CA. Forthcoming, Psychometrika.
CHEN, B., TIAN, J. and PEARL, J. (2014). Testable implications of linear structural equation models. In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence (C. E. Brodley and P. Stone, eds.). AAAI Press, Palo Alto, CA. <http://ftp.cs.ucla.edu/pub/stat_ser/r428-reprint.pdf>.
DRTON, M. and WEIHS, L. (2016). Generic identifiability of linear structural equation models by ancestor decomposition. Scandinavian Journal of Statistics. doi:10.1111/sjos.12227. URL http://dx.doi.org/10.1111/sjos.12227
FOYGEL, R., DRAISMA, J. and DRTON, M. (2012). Half-trek criterion for generic identifiability of linear structural equation models. The Annals of Statistics 40 1682–1713.
HUANG, Y. and VALTORTA, M. (2006). Pearl's calculus of intervention is complete. In Proceedings of the Twenty-Second Conference on Uncertainty in Artificial Intelligence (R. Dechter and T. Richardson, eds.). AUAI Press, Corvallis, OR, 217–224.
KANG, C. and TIAN, J. (2009). Markov properties for linear causal models with correlated errors. The Journal of Machine Learning Research 10 41–70.
PEARL, J. (2004). Robustness of causal claims. In Proceedings of the Twentieth Conference on Uncertainty in Artificial Intelligence (M. Chickering and J. Halpern, eds.). AUAI Press, Arlington, VA, 446–453.
PEARL, J. (2009). Causality: Models, Reasoning, and Inference. 2nd ed. Cambridge University Press, New York.
SHPITSER, I. and PEARL, J. (2006). Identification of conditional interventional distributions. In Proceedings of the Twenty-Second Conference on Uncertainty in Artificial Intelligence (R. Dechter and T. Richardson, eds.). AUAI Press, Corvallis, OR, 437–444.
SHPITSER, I. and PEARL, J. (2008). Dormant independence. In Proceedings of the Twenty-Third Conference on Artificial Intelligence. AAAI Press, Menlo Park, CA, 1081–1087.
TIAN, J. (2002). Studies in Causal Reasoning and Learning. Ph.D. thesis, Computer Science Department, University of California, Los Angeles, CA.
TIAN, J. (2005). Identifying direct causal effects in linear models. In Proceedings of the National Conference on Artificial Intelligence, vol. 20. AAAI Press / The MIT Press, Menlo Park, CA.
TIAN, J. (2007). A criterion for parameter identification in structural equation models. In Proceedings of the Twenty-Third Annual Conference on Uncertainty in Artificial Intelligence (UAI-07). AUAI Press, Corvallis, Oregon.
TIAN, J. (2009). Parameter identification in a class of linear structural equation models. In Proceedings of the Twenty-First International Joint Conference on Artificial Intelligence (IJCAI-09).
TIAN, J. and PEARL, J. (2002a). A general identification condition for causal effects. In Proceedings of the Eighteenth National Conference on Artificial Intelligence. AAAI Press / The MIT Press, Menlo Park, CA, 567–573.
TIAN, J. and PEARL, J. (2002b). On the testable implications of causal models with hidden variables. In Proceedings of the Eighteenth Conference on Uncertainty in Artificial Intelligence (A. Darwiche and N. Friedman, eds.). Morgan Kaufmann, San Francisco, CA, 519–527.
VERMA, T. and PEARL, J. (1990). Equivalence and synthesis of causal models. In Proceedings of the Sixth Conference on Uncertainty in Artificial Intelligence. Cambridge, MA. Also in P. Bonissone, M. Henrion, L. N. Kanal and J. F. Lemmer (eds.), Uncertainty in Artificial Intelligence 6, Elsevier Science Publishers, B.V., 255–268, 1991.
WRIGHT, S. (1921). Correlation and causation. Journal of Agricultural Research 20 557–585.
Antoine Désir
IEOR Department
Columbia University
[email protected]
Vineet Goyal
IEOR Department
Columbia University
[email protected]
Srikanth Jagabathula
IOMS Department
NYU Stern School of Business
[email protected]
Danny Segev
Department of Statistics
University of Haifa
[email protected]
Abstract
We consider the assortment optimization problem when customer preferences
follow a mixture of Mallows distributions. The assortment optimization problem
focuses on determining the revenue/profit maximizing subset of products from a
large universe of products; it is an important decision that is commonly faced by
retailers in determining what to offer their customers. There are two key challenges:
(a) the Mallows distribution lacks a closed-form expression (and requires summing
an exponential number of terms) to compute the choice probability and, hence, the
expected revenue/profit per customer; and (b) finding the best subset may require an
exhaustive search. Our key contributions are an efficiently computable closed-form
expression for the choice probability under the Mallows model and a compact
mixed integer linear program (MIP) formulation for the assortment problem.
1 Introduction
Determining the subset (or assortment) of items to offer is a key decision problem that commonly
arises in several application contexts. A concrete setting is that of a retailer who carries a large
universe of products U but can offer only a subset of the products in each store, online or offline. The
objective of the retailer is typically to choose the offer set that maximizes the expected revenue/profit1
earned from each arriving customer. Determining the best offer set requires: (a) a demand model and
(b) a set optimization algorithm. The demand model specifies the expected revenue from each offer
set, and the set optimization algorithm finds (an approximation of) the revenue maximizing subset.
In determining the demand, the demand model must account for product substitution behavior,
whereby customers substitute to an available product (say, a dark blue shirt) when her most preferred
product (say, a black one) is not offered. The substitution behavior makes the demand for each
offered product a function of the entire offer set, increasing the complexity of the demand model.
Nevertheless, existing work has shown that demand models that incorporate substitution effects
provide significantly more accurate predictions than those that do not. The common approach to
capturing substitution is through a choice model that specifies the demand as the probability P(a|S)
of a random customer choosing product a from offer set S. The most general and popularly studied
class of choice models is the rank-based class [9, 24, 12], which models customer purchase decisions
through distributions over preference lists or rankings. These models assume that in each choice
instance, a customer samples a preference list specifying a preference ordering over a subset of the
1 As elaborated below, conversion-rate maximization can be obtained as a special case of revenue/profit maximization by setting the revenue/profit of all the products to be equal.
products, and chooses the first available product on her list; the chosen product could very well be
the no-purchase option.
The general rank-based model accommodates distributions with exponentially large support sizes
and, therefore, can capture complex substitution patterns; however, the resulting estimation and
decision problems become computationally intractable. Therefore, existing work has focused on
various parametric models over rankings. By exploiting the particular parametric structures, it has
designed tractable algorithms for estimation and decision-making. The most commonly studied
models in this context are the Plackett-Luce (PL) [22] model and its variants, the nested logit (NL)
model and the mixture of PL models. The key reason for their popularity is that the assumptions
made in these models (such as the Gumbel assumption for the error terms in the PL model) are
geared towards obtaining closed-form expressions for the choice probabilities P(a|S). On the other
hand, other popular models in the machine learning literature such as the Mallows model have
largely been ignored because computing choice probabilities under these models has been generally
considered to be computationally challenging, requiring marginalization of a distribution with an
exponentially-large support size.
In this paper, we focus on solving the assortment optimization problem under the Mallows model.
The Mallows distribution was introduced in the mid-1950's [17] and is the most popular member of the so-called distance-based ranking models, which are characterized by a modal ranking ω and a concentration parameter θ. The probability that a ranking σ is sampled falls exponentially as e^{−θ·d(σ,ω)}. Different distance functions result in different models. The Mallows model uses the Kendall-Tau distance, which measures the number of pairwise disagreements between the two rankings. Intuitively, the Mallows model assumes that consumer preferences are concentrated around a central permutation, with the likelihood of large deviations being low.
We assume that the parameters of the model are given. Existing techniques in machine learning
may be applied to estimate the model parameters. In settings of our interest, data are in the form of
choice observations (item i chosen from offer set S), which are often collected as part of purchase
transactions. Existing techniques focus on estimating the parameters of the Mallows model when the
observations are complete rankings [8], partitioned preferences [14] (which include top-k/bottom-k
items), or a general partial-order specified in the form of a collection of pairwise preferences [15].
While the techniques based on complete rankings and partitioned preferences don?t apply to this
context, the techniques proposed in [15] can be applied to infer the model parameters.
Our results. We address the two key computational challenges that arise in solving our problem:
(a) efficiently computing the choice probabilities and hence, the expected revenue/profit, for a given
offer set S and (b) finding the optimal offer set S ? . Our main contribution is to propose two alternate
procedures to efficiently compute the choice probabilities P(a|S) under the Mallows model. As
elaborated below, even computing choice probabilities is a non-trivial computational task because it
requires marginalizing the distribution by summing it over an exponential number of rankings. In
fact, computing the probability of a general partial order under the Mallows model is known to be a
#P hard problem [15, 3]. Despite this, we show that the Mallows distribution has rich combinatorial
structure, which we exploit to derive a closed-form expression for the choice probabilities that takes
the form of a discrete convolution. Using the fast Fourier transform, the choice probability expression
can be evaluated in O(n² log n) time (see Theorem 3.2), where n is the number of products. In
Section 4, we exploit the repeated insertion method (RIM) [7] for sampling rankings according to
the Mallows distribution to obtain a dynamic program (DP) for computing the choice probabilities
in O(n³) time (see Theorem 4.2). The key advantage of the DP specification is that the choice
probabilities are expressed as the unique solution to a system of linear equations. Based on this
specification, we formulate the assortment optimization problem as a compact mixed linear integer
program (MIP) with O(n) binary variables and O(n³) continuous variables and constraints. The
MIP provides a framework to model a large class of constraints on the assortment (often called
?business constraints") that are necessarily present in practice and also extends to mixture of Mallows
model. Using a simulation study, we show that the MIP provides accurate assortment decisions in a
reasonable amount of time for practical problem sizes.
The exact computation approaches that we propose for computing choice probabilities are necessary
building blocks for our MIP formulation. They also provide computationally efficient alternatives to
computing choice probabilities via Monte-Carlo simulations using the RIM sampling method. In fact,
the simulation approach will require exponentially many samples to obtain reliable estimates when
2
products have exponentially small choice probabilities. Such products commonly occur in practice
(such as the tail products in luxury retail). They also often command high prices because of which
discarding them can significantly lower the revenues.
Literature review. A large number of parametric models over rankings have been extensively
studied in the areas of statistics, transportation, marketing, economics, and operations management
(see [18] for a detailed survey of most of these models). Our work particularly has connections to the
work in machine learning and operations management. The existing work in machine learning has
focused on designing computationally efficient algorithms for estimating the model parameters from
commonly available observations (complete rankings, top-k/bottom-k lists, pairwise comparisons,
etc.). The developed techniques mainly consist of efficient algorithms for computing the likelihood of
the observed data [14, 11] and sampling techniques for sampling from the distributions conditioned
on observed data [15, 20]. The Plackett-Luce (PL) model, the Mallows model, and their variants have
been, by far, the most studied models in this literature. On the other hand, the work in operations
management has mainly focused on designing set optimization algorithms to find the best subset
efficiently. The multinomial logit (MNL) model has been the most commonly studied model in this
literature. The MNL model was made popular by the work of [19] and has been shown by [25] to be
equivalent to the PL model, introduced independently by Luce [16] and Plackett [22]. When given
the model parameters, the assortment optimization problem has been shown to be efficiently solvable
for the MNL model by [23], variants of the nested logit (NL) model by [6, 10], and the Markov chain
model by [2]. The problem is known to be hard for most other choice models [4], so [13] studies the
performance of the local search algorithm for some of the assortment problems that are known to be
hard. As mentioned, the literature in operations management has restricted itself to only those models
for which choice probabilities are known to be efficiently computed. In the context of the literature,
our key contribution is to extend a popular model in the machine learning literature to choice contexts
and the assortment problem.
2 Model and problem statement
Notation. We consider a universe U of n products. In order to distinguish products from their corresponding ranks, we let U = {a_1, ..., a_n} denote the universe of products, under an arbitrary indexing. Preferences over this universe are captured by an anti-reflexive, anti-symmetric, and transitive relation ≻, which induces a total ordering (or ranking) over all products; specifically, a ≻ b means that a is preferred to b. We represent preferences through rankings or permutations. A complete ranking (or simply a ranking) is a bijection σ : U → [n] that maps each product a ∈ U to its rank σ(a) ∈ [n], where [j] denotes the set {1, 2, ..., j} for any integer j. Lower ranks indicate higher preference, so that σ(a) < σ(b) if and only if a ≻_σ b, where ≻_σ denotes the preference relation induced by the ranking σ. For simplicity of notation, we also let σ_i denote the product ranked at position i. Thus, σ_1 σ_2 · · · σ_n is the list of the products written in increasing order of their ranks. Finally, for any two integers i ≤ j, let [i, j] denote the set {i, i+1, ..., j}.
Mallows model. The Mallows model is a member of the distance-based family of ranking models [21]. This model is described by a location parameter ω, which denotes the central permutation, and a scale parameter θ ∈ R+, such that the probability of each permutation σ is given by

    P(σ) = e^{−θ·d(σ,ω)} / ψ(θ),

where ψ(θ) = Σ_σ exp(−θ·d(σ,ω)) is the normalization constant, and d(·,·) is the Kendall-Tau metric of distance between permutations, defined as d(σ,ω) = Σ_{i<j} 1[(σ(a_i) − σ(a_j)) · (ω(a_i) − ω(a_j)) < 0]. In other words, d(σ,ω) counts the number of pairwise disagreements between the permutations σ and ω. It can be verified that d(·,·) is a distance function that is right-invariant under the composition of the symmetric group, i.e., d(σ_1, σ_2) = d(σ_1 ρ, σ_2 ρ) for every ρ, σ_1, σ_2, where the composition σρ is defined as σρ(a) = σ(ρ(a)). This symmetry can be exploited to show that the normalization constant ψ(θ) has a closed-form expression [18] given by ψ(θ) = ∏_{i=1}^{n} (1 − e^{−iθ}) / (1 − e^{−θ}). Note that ψ(θ) depends only on the scale parameter θ and does not depend on the location parameter. Intuitively, the Mallows model defines a set of consumers whose preferences are "similar", in the sense of being centered around a common permutation, where the probability of deviations thereof decreases exponentially. The similarity of consumer preferences is captured by the Kendall-Tau distance metric.
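A direct transcription of these definitions is a useful reference point. In the sketch below (ours), a ranking is an array mapping product index to rank, θ > 0 is assumed, and the normalizer uses the closed form just stated.

```python
import math
from itertools import combinations

def kendall_tau(sigma, omega):
    """Number of pairwise disagreements between two rankings; sigma[i] and
    omega[i] are the ranks of product i."""
    n = len(sigma)
    return sum(1 for i, j in combinations(range(n), 2)
               if (sigma[i] - sigma[j]) * (omega[i] - omega[j]) < 0)

def mallows_pmf(sigma, omega, theta):
    """P(sigma) = exp(-theta * d(sigma, omega)) / psi(theta), with
    psi(theta) = prod_{i=1}^{n} (1 - e^{-i*theta}) / (1 - e^{-theta})."""
    n = len(sigma)
    psi = math.prod((1 - math.exp(-i * theta)) / (1 - math.exp(-theta))
                    for i in range(1, n + 1))
    return math.exp(-theta * kendall_tau(sigma, omega)) / psi
```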
Problem statement. We first focus on efficiently computing the probability that a product a will be chosen from an offer set S ⊆ U under the Mallows model. When offered the subset S, the customer is assumed to sample a preference list according to the Mallows model and then choose the most preferred product from S according to the sampled list. Therefore, the probability of choosing product a from the offer set S is given by

    P(a|S) = Σ_σ P(σ) · 1[σ, a, S],    (1)

where 1[σ, a, S] indicates whether σ(a) < σ(a′) for all a′ ∈ S, a′ ≠ a. Note that the above sum runs over n! preference lists, meaning that it is a priori unclear if P(a|S) can be computed efficiently.
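For small n, the sum in (1) can be evaluated by direct enumeration, which is handy for validating the faster methods developed below; a brute-force sketch of ours:

```python
import math
from itertools import permutations

def choice_prob_bruteforce(a, S, omega, theta):
    """Evaluate Eq. (1) over all n! rankings; viable only for small n.
    omega[i] is the rank of product i under the central permutation."""
    n = len(omega)
    num = total = 0.0
    for perm in permutations(range(n)):       # perm lists products by rank
        sigma = [0] * n
        for rank, item in enumerate(perm):
            sigma[item] = rank
        d = sum(1 for i in range(n) for j in range(i + 1, n)
                if (sigma[i] - sigma[j]) * (omega[i] - omega[j]) < 0)
        w = math.exp(-theta * d)
        total += w
        if min(S, key=lambda b: sigma[b]) == a:   # a ranked first within S
            num += w
    return num / total
```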
Once we are able to compute the choice probabilities, we consider the assortment optimization problem. For that, we extend the product universe to include an additional product a_q that represents the outside (no-purchase) option and extend the Mallows model to n+1 products. Each product a has an exogenously fixed price r_a, with the price r_q of the outside option set to 0. Then, the goal is to solve the following decision problem:
    max_{S⊆U} Σ_{a∈S} P(a | S ∪ {a_q}) · r_a.    (2)

The above optimization problem is hard to approximate within O(n^{1−ε}) under a general choice model [1].

3 Choice probabilities: closed-form expression
We now show that the choice probabilities can be computed efficiently under the Mallows model. Without loss of generality, we assume from this point on that the products are indexed such that the central permutation ω ranks product a_i at position i, for all i ∈ [n]. The next theorem shows that, when the offer set is contiguous, the choice probabilities enjoy a rather simple form. Using these expressions as building blocks, we derive closed-form expressions for general offer sets.

Theorem 3.1 (Contiguous offer set) Suppose S = a_{[i,j]} = {a_i, ..., a_j} for some 1 ≤ i ≤ j ≤ n. Then, the probability of choosing product a_k ∈ S under the Mallows model with location parameter ω and scale parameter θ is given by

    P(a_k|S) = e^{−θ(k−i)} / (1 + e^{−θ} + · · · + e^{−θ(j−i)}).
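Theorem 3.1 translates directly into code; the probabilities below sum to one over S and can be cross-checked against the brute-force evaluation of (1). A sketch of ours:

```python
import math

def contiguous_choice_prob(k, i, j, theta):
    """P(a_k | S) for the contiguous offer set S = {a_i, ..., a_j}
    (Theorem 3.1), with products indexed as in the central permutation."""
    assert i <= k <= j
    denom = sum(math.exp(-theta * t) for t in range(j - i + 1))
    return math.exp(-theta * (k - i)) / denom
```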
The proof of Theorem 3.1 is in Appendix A. The expression for the choice probability under a general offer set is more involved. For that, we need the following additional notation. For a pair of integers 1 ≤ m ≤ q ≤ n, define

    φ(q, θ) = ∏_{s=1}^{q} Σ_{ℓ=0}^{s−1} e^{−θℓ}   and   φ(q, m, θ) = φ(m, θ) · φ(q−m, θ).

In addition, for a collection of M discrete functions h_m : Z → R, m = 1, ..., M, such that h_m(r) = 0 for any r < 0, their discrete convolution is defined as

    (h_1 ∗ · · · ∗ h_M)(r) = Σ_{r_1,...,r_M : Σ_m r_m = r} h_1(r_1) · · · h_M(r_M).
Theorem 3.2 (General offer set) Suppose S = a_{[i_1,j_1]} ∪ · · · ∪ a_{[i_M,j_M]}, where i_m ≤ j_m for 1 ≤ m ≤ M and j_m < i_{m+1} for 1 ≤ m ≤ M−1. Let G_m = a_{[j_m, i_{m+1}]} for 1 ≤ m ≤ M−1, G = G_1 ∪ · · · ∪ G_M, and C = a_{[i_1, j_M]}. Then, the probability of choosing a_k ∈ a_{[i_ℓ, j_ℓ]} can be written as

    P(a_k|S) = e^{−θ(k−i_1)} · (∏_{m=1}^{M−1} φ(|G_m|, θ) / φ(|C|, θ)) · (f_0 ∗ f̃_1 ∗ · · · ∗ f̃_ℓ ∗ f_{ℓ+1} ∗ · · · ∗ f_M)(|G|),

where:
• f_m(r) = e^{−θ·r·(j_m − i_1 + 1 + r/2)} · 1/φ(|G_m|, r, θ), if 0 ≤ r ≤ |G_m|, for 1 ≤ m ≤ M.
• f̃_m(r) = e^{θr} · f_m(r), for 1 ≤ m ≤ M.
• f_0(r) = φ(|C|, |G| − r, θ) · e^{−θ(|G|−r)²/2} / (1 + e^{−θ} + · · · + e^{−θ(|S|−1+r)}), for 0 ≤ r ≤ |G|.
• f_m(r) = 0, for 0 ≤ m ≤ M and any r outside the ranges described above.
Proof. At a high level, deriving the choice probability expression for a general offer set involves breaking down the probabilistic event of choosing a_k ∈ S into simpler events for which we can use the expression given in Theorem 3.1, and then combining these expressions using the symmetries of the Mallows distribution.

For a given vector R = (r_0, ..., r_M) ∈ R^{M+1} such that r_0 + · · · + r_M = |G|, let h(R) be the set of permutations which satisfy the following two conditions: i) among all the products of S, a_k is the most preferred, and ii) for all m ∈ [M], there are exactly r_m products from G_m which are preferred to a_k. We denote this subset of products by G̃_m for all m ∈ [M]. This implies that there are r_0 products from G which are less preferred than a_k. With this notation, we can write

    P(a_k|S) = Σ_{R: r_0+···+r_M=|G|} Σ_{σ∈h(R)} P(σ),  where recall that  P(σ) = e^{−θ·Σ_{i,j} δ(σ,i,j)} / ψ(θ),

with δ(σ, i, j) = 1[(σ(a_i) − σ(a_j)) · (ω(a_i) − ω(a_j)) < 0]. For all σ, we break down the sum in the exponential as follows: Σ_{i,j} δ(σ, i, j) = C_1(σ) + C_2(σ) + C_3(σ), where C_1(σ) contains pairs of products (i, j) such that a_i ∈ G̃_m for some m ∈ [M] and a_j ∈ S, C_2(σ) contains pairs of products (i, j) such that a_i ∈ G̃_m for some m ∈ [M] and a_j ∈ G_{m′}\G̃_{m′} for some m′ ≠ m, and C_3(σ) contains the remaining pairs of products. For a fixed R, we show that C_1(σ) and C_2(σ) are constant for all σ ∈ h(R).
Part 1. C_1(σ) counts the number of disagreements (i.e., the number of pairs of products that are oppositely ranked in σ and ω) between some product in S and some product in G̃_m for any m ∈ [M]. For all m ∈ [M], a product a_i ∈ G̃_m induces a disagreement with every product a_j ∈ S such that j < i. Therefore, the sum of all these disagreements is equal to

    C_1(σ) = Σ_{m=1}^{M} Σ_{a_j ∈ S, a_i ∈ G̃_m} δ(σ, i, j) = Σ_{m=1}^{M} r_m · Σ_{j=1}^{m} |S_j|,

where S_m = a_{[i_m, j_m]}.
Part 2. C_2(σ) counts the number of disagreements between some product in any G̃_m and some product in any G_{m′}\G̃_{m′} for m′ ≠ m. The sum of all these disagreements is equal to

    C_2(σ) = Σ_{m≠m′} Σ_{a_i ∈ G̃_m, a_j ∈ G_{m′}\G̃_{m′}} δ(σ, i, j)
           = Σ_{m=2}^{M} r_m · Σ_{j=1}^{m−1} (|G_j| − r_j)
           = Σ_{m=2}^{M} r_m · Σ_{j=1}^{m−1} |G_j| − Σ_{m=2}^{M} r_m · Σ_{j=1}^{m−1} r_j
           = Σ_{m=2}^{M} r_m · Σ_{j=1}^{m−1} |G_j| − (1/2)·(|G| − r_0)² + (1/2)·Σ_{m=1}^{M} r_m².

Consequently, for all σ ∈ h(R), we can write d(σ, ω) = C_1(R) + C_2(R) + C_3(σ) and therefore,

    P(a_k|S) = Σ_{R: r_0+···+r_M=|G|} (e^{−θ·(C_1(R)+C_2(R))} / ψ(θ)) · Σ_{σ∈h(R)} e^{−θ·C_3(σ)}.
Computing the inner sum requires a similar but more involved partitioning of the permutations as well as using Theorem 3.1. The details are presented in Appendix B. In particular, we can show that for a fixed R, Σ_{σ∈h(R)} e^{−θ·C_3(σ)} is equal to

    φ(|G| − r_0, θ) · φ(|S| + r_0, θ) · (e^{−θ(k − 1 − Σ_{m=1}^{ℓ−1} r_m)} / (1 + · · · + e^{−θ(|S|+r_0−1)})) · ∏_{m=1}^{M} (φ(|G_m|, θ) / (φ(r_m, θ) · φ(|G_m| − r_m, θ))).

Putting all the pieces together yields the desired result.
Due to representing P(a_k|S) as a discrete convolution, we can efficiently compute this probability using the fast Fourier transform in O(n² log n) time [5], which is a dramatic improvement over the
exponential sum (1) that defines the choice probabilities. Although Theorem 3.2 allows us to
compute the choice probabilities in polynomial time, it is not directly useful in solving the assortment
optimization problem under the Mallows model. To this end, we present an alternative (and slightly
less efficient) method for computing the choice probabilities by means of dynamic programming.
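The only computationally heavy step in evaluating Theorem 3.2 is the M-fold discrete convolution, which the FFT reduces to pointwise products. A generic sketch of ours (the kernels themselves would be filled in from the theorem's definitions of f_0, f_m, and f̃_m):

```python
import numpy as np

def convolve_kernels(kernels, r_max):
    """(f_1 * ... * f_M)(r) for r = 0..r_max via FFT. Each kernel is a 1-D
    array with kernel[r] = f_m(r), implicitly zero outside its range."""
    size = sum(len(k) - 1 for k in kernels) + 1   # full convolution length
    nfft = 1 << max(1, (size - 1).bit_length())   # next power of two
    acc = np.ones(nfft, dtype=complex)
    for k in kernels:
        acc *= np.fft.fft(k, nfft)
    return np.fft.ifft(acc).real[:r_max + 1]
```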
4 Choice probabilities: a dynamic programming approach
In what follows, we present an alternative algorithm for computing the choice probabilities. Our approach is based on an efficient procedure to sample a random permutation according to a Mallows distribution with location parameter ω and scale parameter θ. The random permutation is constructed sequentially using a repeated insertion method (RIM) as follows. For i = 1, ..., n and s = 1, ..., i, insert a_i at position s with probability p_{i,s} = e^{−θ(i−s)} / (1 + e^{−θ} + · · · + e^{−θ(i−1)}).
Lemma 4.1 (Theorem 3 in [15]) The repeated insertion method generates a random sample from a Mallows distribution with location parameter ω and scale parameter θ.
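The repeated insertion method is a few lines of code; a sketch of ours for the identity central ranking (random.choices draws the insertion position with the probabilities p_{i,s} above):

```python
import math
import random

def sample_mallows(n, theta):
    """Sample a ranking from the Mallows model with identity central ranking
    via RIM; returns products (0-indexed) from most to least preferred."""
    ranking = []
    for i in range(1, n + 1):                 # insert product a_i
        # p_{i,s} is proportional to e^{-theta * (i - s)} for s = 1, ..., i
        weights = [math.exp(-theta * (i - s)) for s in range(1, i + 1)]
        s = random.choices(range(i), weights=weights)[0]   # 0-indexed slot
        ranking.insert(s, i - 1)
    return ranking
```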
Based on the correctness of this procedure, we describe a dynamic program to compute the choice probabilities of a general offer set S. The key idea is to decompose these probabilities to include the position at which a product is chosen. In particular, for i ≤ k and s ∈ [k], let π(i, s, k) be the probability that product a_i is chosen (i.e., first among products in S) at position s after the k-th step of the RIM. In other words, π(i, s, k) corresponds to a choice probability when restricting U to the first k products, a_1, ..., a_k. With this notation, we have for all i ∈ [n], P(a_i|S) = Σ_{s=1}^{n} π(i, s, n).
We compute π(i, s, k) iteratively for k = 1, ..., n. In particular, in order to compute π(i, s, k+1),
we use the correctness of the sampling procedure. Specifically, starting from a permutation that
includes the products a_1, ..., a_k, the product a_{k+1} is inserted at position j with probability p_{k+1,j},
and we have two cases to consider.

Case 1: a_{k+1} ∉ S. In this case, π(k+1, s, k+1) = 0 for all s = 1, ..., k+1. Consider a product
a_i for i ≤ k. In order for a_i to be chosen at position s after a_{k+1} is inserted, one of the following
events has to occur: i) a_i was already chosen at position s before a_{k+1} is inserted, and a_{k+1} is
inserted at a position ℓ > s, or ii) a_i was chosen at position s-1, and a_{k+1} is inserted at a position
ℓ ≤ s-1. Consequently, we have for all i ≤ k

    π(i, s, k+1) = \sum_{ℓ=s+1}^{k+1} p_{k+1,ℓ} \cdot π(i, s, k) + \sum_{ℓ=1}^{s-1} p_{k+1,ℓ} \cdot π(i, s-1, k)
                 = (1 - γ_{k+1,s}) \cdot π(i, s, k) + γ_{k+1,s-1} \cdot π(i, s-1, k),

where γ_{k,s} = \sum_{ℓ=1}^{s} p_{k,ℓ} for all k, s.

Case 2: a_{k+1} ∈ S. Consider a product a_i with i ≤ k. This product is chosen at position s only
if it was already chosen at position s and a_{k+1} is inserted at a position ℓ > s. Therefore, for all
i ≤ k, π(i, s, k+1) = (1 - γ_{k+1,s}) \cdot π(i, s, k). For product a_{k+1}, it is chosen at position s only if
all products a_i for i ≤ k are at positions ℓ ≥ s and a_{k+1} is inserted at position s, implying that

    π(k+1, s, k+1) = p_{k+1,s} \cdot \sum_{i ≤ k} \sum_{ℓ=s}^{n} π(i, ℓ, k).
Algorithm 1 summarizes this procedure.

Algorithm 1 Computing choice probabilities
1: Let S be a general offer set. Without loss of generality, we assume that a_1 ∈ S.
2: Let π(1, 1, 1) = 1.
3: For k = 1, ..., n-1,
   (a) For all i ≤ k and s = 1, ..., k+1, let
       π(i, s, k+1) = (1 - γ_{k+1,s}) \cdot π(i, s, k) + 1[a_{k+1} ∉ S] \cdot γ_{k+1,s-1} \cdot π(i, s-1, k).
   (b) For s = 1, ..., k+1, let
       π(k+1, s, k+1) = 1[a_{k+1} ∈ S] \cdot p_{k+1,s} \cdot \sum_{i ≤ k} \sum_{ℓ=s}^{n} π(i, ℓ, k).
4: For all i ∈ [n], return P(a_i | S) = \sum_{s=1}^{n} π(i, s, n).
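The dynamic program translates almost line-for-line into code. The following sketch is our own illustration (function and variable names are ours); it follows the recurrences above with π stored as an array indexed by product and position.

```python
import numpy as np

def mallows_choice_probs(n, theta, S):
    """Choice probabilities P(a_i | S) via Algorithm 1 (O(n^3) time).

    Products a_1, ..., a_n are indexed by their position in the modal
    ranking; S is a set of 1-based indices with 1 in S (w.l.o.g.).
    pi[i, s] holds pi(i, s, k) for the current RIM step k.
    """
    assert 1 in S, "w.l.o.g. the first product a_1 is assumed to be in S"
    pi = np.zeros((n + 1, n + 2))   # rows: products, cols: positions
    pi[1, 1] = 1.0
    for k in range(1, n):
        # insertion probabilities p_{k+1,s}, s = 1..k+1, and prefix sums
        w = np.exp(-theta * ((k + 1) - np.arange(1, k + 2)))
        p = w / w.sum()
        gamma = np.concatenate(([0.0], np.cumsum(p)))
        new = np.zeros_like(pi)
        offered = (k + 1) in S
        for i in range(1, k + 1):
            for s in range(1, k + 2):
                # a_{k+1} is inserted strictly below position s
                new[i, s] = (1 - gamma[s]) * pi[i, s]
                if not offered:                      # case 1 only
                    new[i, s] += gamma[s - 1] * pi[i, s - 1]
        if offered:                                  # case 2
            for s in range(1, k + 2):
                new[k + 1, s] = p[s - 1] * pi[1:k + 1, s:].sum()
        pi = new
    return {i: pi[i, :].sum() for i in range(1, n + 1)}
```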
Theorem 4.2 For any offer set S, Algorithm 1 returns the choice probabilities under a Mallows
distribution with location parameter ω and scale parameter θ.

This dynamic programming approach provides an O(n^3) time algorithm for computing P(a | S) for
all products a ∈ S simultaneously. Moreover, as explained in the next section, these ideas lead to an
algorithm to solve the assortment optimization problem.
5 Assortment optimization: integer programming formulation

In the assortment optimization problem, each product a has an exogenously fixed price r_a. Moreover,
there is an additional product a_q that represents the outside option (no-purchase), with price r_q = 0,
that is always included. The goal is to determine the subset of products that maximizes the expected
revenue, i.e., solve (2). Building on Algorithm 1 and introducing a binary variable for each product,
we formulate (2) as an MIP with O(n^3) variables and constraints, of which only n variables are binary.
We assume for simplicity that the first product of S (say a_1) is known. Since this product is generally
not known a priori, in order to obtain an optimal solution to problem (2), we need to guess the first
offered product and solve the above integer program for each of the O(n) guesses. We note that the
MIP formulation is quite powerful and can handle a large class of constraints on the assortment (such
as cardinality and capacity constraints) and also extends to the case of the mixture of Mallows model.
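For very small instances, the optimum of problem (2) can also be found by exhaustive search over assortments, which is useful as a sanity check against the MIP below. The following sketch (our own code) does exactly that, reusing mallows_choice_probs from the earlier sketch and modeling the outside option as the product ranked first, with price 0.

```python
from itertools import combinations

def best_assortment_bruteforce(n, theta, prices, q=1):
    """Exhaustive baseline for problem (2) on small instances.

    The outside option a_q is modeled as product q with price 0 and is
    always offered (q = 1 here, matching a modal ranking in which the
    outside option sits at the top). With 2^(n-1) candidate sets this
    is only usable for small n, e.g. to validate the MIP of Theorem 5.1.
    """
    others = [i for i in range(1, n + 1) if i != q]
    best_rev, best_S = 0.0, {q}
    for r in range(len(others) + 1):
        for extra in combinations(others, r):
            S = {q} | set(extra)
            probs = mallows_choice_probs(n, theta, S)
            rev = sum(prices.get(i, 0.0) * probs[i] for i in S)
            if rev > best_rev:
                best_rev, best_S = rev, S
    return best_S, best_rev
```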
Theorem 5.1 Conditional on a_1 ∈ S, the optimal solution to (2) is given by S* = {i ∈ [n] : x*_i = 1},
where x* ∈ {0,1}^n is the optimal solution to the following MIP:

    max_{x, π, y, z}  \sum_{i,s} r_i \cdot π(i, s, n)
    s.t.  π(1, 1, 1) = 1,  π(1, s, 1) = 0                                  ∀ s = 2, ..., n
          π(i, s, k+1) = (1 - γ_{k+1,s}) \cdot π(i, s, k) + y_{i,s,k+1}    ∀ i, s, ∀ k ≥ 2
          π(k+1, s, k+1) = z_{s,k+1}                                       ∀ s, ∀ k ≥ 2
          y_{i,s,k} ≤ γ_{k+1,s-1} \cdot π(i, s-1, k-1)                     ∀ i, s, ∀ k ≥ 2
          0 ≤ y_{i,s,k} ≤ γ_{k+1,s-1} \cdot (1 - x_k)                      ∀ i, s, ∀ k ≥ 2
          z_{s,k} ≤ p_{k+1,s} \cdot \sum_{ℓ=s}^{n} \sum_{i=1}^{k-1} π(i, ℓ, k-1)   ∀ s, ∀ k ≥ 2
          0 ≤ z_{s,k} ≤ p_{k+1,s} \cdot x_k                                ∀ s, ∀ k ≥ 2
          x_1 = 1,  x_q = 1,  x_k ∈ {0, 1}

We present the proof of correctness for this formulation in Appendix C.
6 Numerical experiments

In this section, we examine how the MIP performs in terms of the running time. We considered the
following simulation setup. Product prices are sampled independently and uniformly at random from
the interval [0, 1]. The modal ranking is fixed to the identity ranking with the outside option ranked
at the top. The outside option being ranked at the top is characteristic of applications in which the
retailer captures a small fraction of the market and the outside option represents the (much larger) rest
of the market. Because the outside option is always offered, we need to solve only a single instance
of the MIP (described in Theorem 5.1). Note that in the more general setting, the number of MIPs
that must be solved is equal to the minimum of the rank of the outside option and the rank of the highest
revenue item.2 Because the MIPs are independent of each other, they can be solved in parallel. We
solved the MIPs using the Gurobi Optimizer version 6.0.0 on a computer with a 2.4 GHz
Intel Core i5 processor, 8 GB of RAM, and operating system Mac OS X El Capitan.
Strengthening of the MIP formulation. We use structural properties of the optimal solution
to tighten some of the upper bounds involving the binary variables in the MIP formulation. In
particular, for all i, s, and m, we replace the constraint y_{i,s,m} ≤ γ_{m+1,s-1} \cdot (1 - x_m) with the
constraint y_{i,s,m} ≤ γ_{m+1,s-1} \cdot u_{i,s,m} \cdot (1 - x_m), where u_{i,s,m} is the probability that product a_i
is selected at position s-1 after the m-th step of the RIM when the offer set is S = {a_{i*}, a_q},
i.e., when only the highest priced product is offered. Since we know that the highest price product
is always offered in the optimal assortment, this is a valid upper bound on π(i, s-1, m-1) and,
therefore, a valid strengthening of the constraint. Similarly, for all s and m, we replace the constraint
z_{s,m} ≤ p_{m+1,s} \cdot x_m with the constraint z_{s,m} ≤ p_{m+1,s} \cdot v_{s,m} \cdot x_m, where v_{s,m} is equal to the
probability that product a_i is selected at a position ℓ = s, ..., n when the offer set is
S = {a_q} if a_i precedes a_{i*} in the modal ranking, and S = {a_q, a_{i*}} otherwise. Again using the fact that the highest price
product is always offered in the optimal assortment, we can show that this is a valid upper bound.
Results and discussion. Table 1 shows the running time of the strengthened MIP formulation for
different values of e^{-θ} and n. For each pair of parameters, we generated 50 different instances.

          Average running time (s)        Max running time (s)
    n     e^{-θ} = 0.8   e^{-θ} = 0.9     e^{-θ} = 0.8   e^{-θ} = 0.9
    10    4.60           4.72             5.64           5.80
    15    19.04          21.30            27.08          28.79
    20    48.08          105.30           58.09          189.93
    25    143.21         769.78           183.78         1,817.98

    Table 1: Running time of the strengthened MIP for various values of e^{-θ} and n.

We
note that the strengthening improves the running time considerably. Under the initial formulation,
the MIP did not terminate after several hours for n = 25, whereas it was able to terminate in a few
minutes with the additional strengthening. Our MIP obtains the optimal solution in a reasonable
amount of time for the considered parameter values. Outside of this range, i.e., when e^{-θ} is too small
or when n is too large, there are potential numerical instabilities. The strengthening we propose is
one way to improve the running time of the MIP, but other numerical optimization techniques may be
applied to improve the running time even further. Finally, we emphasize that the MIP formulation
is necessary because of its flexibility to handle versatile business constraints (such as cardinality or
capacity constraints) that naturally arise in practice.
Extensions and future work. Although the entire development was focused on a single Mallows
model, our results extend to a finite mixture of Mallows models. Specifically, for a Mallows model
with T mixture components, we can compute the choice probability by setting π(i, s, n) = \sum_{t=1}^{T} λ_t \cdot π_t(i, s, n), where π(i, s, n) is the probability term defined in Section 4, π_t(·, ·, ·) is the probability for
mixture component t, and λ_t > 0 are the mixture weights. We then have P(a_i | S) = \sum_{s=1}^{n} π(i, s, n)
for the mixture model. Correspondingly, the MIP in Section 5 also naturally extends. The natural
next step is to develop special purpose algorithms to solve the MIP that exploit the structure of the
Mallows distributions, allowing to scale to large values of n.
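The mixture extension is a direct weighted combination of the single-model DP; a minimal sketch (our own code, assuming all components share the identity modal ranking):

```python
def mixture_choice_probs(n, thetas, weights, S):
    """Choice probabilities under a finite mixture of Mallows models.

    Implements P(a_i | S) = sum_t lambda_t * P_t(a_i | S) by running the
    DP once per component. Distinct modal rankings per component would
    only require relabeling the products before calling the DP.
    """
    probs = {i: 0.0 for i in range(1, n + 1)}
    for theta, lam in zip(thetas, weights):
        comp = mallows_choice_probs(n, theta, S)
        for i, p in comp.items():
            probs[i] += lam * p
    return probs
```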
² It can be shown that the highest revenue item is always part of the optimal subset.
References
[1] A. Aouad, V. Farias, R. Levi, and D. Segev. The approximability of assortment optimization under ranking preferences. Available at SSRN: http://ssrn.com/abstract=2612947, 2015.
[2] Jose H. Blanchet, Guillermo Gallego, and Vineet Goyal. A Markov chain approximation to choice modeling. In EC, pages 103-104, 2013.
[3] G. Brightwell and P. Winkler. Counting linear extensions is #P-complete. In STOC '91: Proceedings of the Twenty-Third Annual ACM Symposium on Theory of Computing, pages 175-181, 1991.
[4] Juan José Miranda Bront, Isabel Méndez-Díaz, and Gustavo Vulcano. A column generation algorithm for choice-based network revenue management. Operations Research, 57(3):769-784, 2009.
[5] J. Cooley and J. Tukey. An algorithm for the machine calculation of complex Fourier series. Mathematics of Computation, 19(90):297-301, 1965.
[6] James M. Davis, Guillermo Gallego, and Huseyin Topaloglu. Assortment optimization under variants of the nested logit model. Operations Research, 62(2):250-273, 2014.
[7] J. Doignon, A. Pekeč, and M. Regenwetter. The repeated insertion model for rankings: Missing link between two subset choice models. Psychometrika, 69(1):33-54, 2004.
[8] C. Dwork, R. Kumar, M. Naor, and D. Sivakumar. Rank aggregation methods for the web. In Proceedings of the 10th International Conference on World Wide Web, pages 613-622. ACM, 2001.
[9] V. Farias, S. Jagabathula, and D. Shah. A nonparametric approach to modeling choice with limited data. Management Science, 59(2):305-322, 2013.
[10] Guillermo Gallego and Huseyin Topaloglu. Constrained assortment optimization for the nested logit model. Management Science, 60(10):2583-2601, 2014.
[11] John Guiver and Edward Snelson. Bayesian inference for Plackett-Luce ranking models. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 377-384. ACM, 2009.
[12] D. Honhon, S. Jonnalagedda, and X. Pan. Optimal algorithms for assortment selection under ranking-based consumer choice models. Manufacturing & Service Operations Management, 14(2):279-289, 2012.
[13] S. Jagabathula. Assortment optimization under general choice. Available at SSRN 2512831, 2014.
[14] G. Lebanon and Y. Mao. Non-parametric modeling of partially ranked data. Journal of Machine Learning Research, 9:2401-2429, 2008.
[15] T. Lu and C. Boutilier. Learning Mallows models with pairwise preferences. In Proceedings of the 28th International Conference on Machine Learning, pages 145-152, 2011.
[16] R. D. Luce. Individual Choice Behavior: A Theoretical Analysis. Wiley, 1959.
[17] C. Mallows. Non-null ranking models. Biometrika, 44(1):114-130, 1957.
[18] J. Marden. Analyzing and Modeling Rank Data. Chapman and Hall, 1995.
[19] D. McFadden. Modeling the choice of residential location. Transportation Research Record, (672):72-77, 1978.
[20] M. Meila, K. Phadnis, A. Patterson, and J. Bilmes. Consensus ranking under the exponential model. arXiv preprint arXiv:1206.5265, 2012.
[21] T. Murphy and D. Martin. Mixtures of distance-based models for ranking data. Computational Statistics & Data Analysis, 41(3):645-655, 2003.
[22] R. L. Plackett. The analysis of permutations. Applied Statistics, 24(2):193-202, 1975.
[23] K. Talluri and G. van Ryzin. Revenue management under a general discrete choice model of consumer behavior. Management Science, 50(1):15-33, 2004.
[24] G. van Ryzin and G. Vulcano. A market discovery algorithm to estimate a general class of nonparametric choice models. Management Science, 61(2):281-300, 2014.
[25] John I. Yellott. The relationship between Luce's choice axiom, Thurstone's theory of comparative judgment, and the double exponential distribution. Journal of Mathematical Psychology, 15(2):109-144, 1977.
5,775 | 6,225 | Variational Inference in Mixed Probabilistic Submodular Models
Josip Djolonga, Sebastian Tschiatschek, Andreas Krause
Department of Computer Science, ETH Zürich
{josipd,tschiats,krausea}@inf.ethz.ch
Abstract
We consider the problem of variational inference in probabilistic models with both
log-submodular and log-supermodular higher-order potentials. These models can
represent arbitrary distributions over binary variables, and thus generalize the
commonly used pairwise Markov random fields and models with log-supermodular
potentials only, for which efficient approximate inference algorithms are known.
While inference in the considered models is #P-hard in general, we present efficient approximate algorithms exploiting recent advances in the field of discrete
optimization. We demonstrate the effectiveness of our approach in a large set of
experiments, where our model allows reasoning about preferences over sets of
items with complements and substitutes.
1 Introduction

Probabilistic inference is one of the main building blocks for decision making under uncertainty. In
general, however, this problem is notoriously hard even for deceptively simple-looking models and
approximate inference techniques are necessary. There are essentially two large classes in which
we can categorize approximate inference algorithms: those based on variational inference or on
sampling. However, these methods typically do not scale well to large numbers of variables, or
exhibit an exponential dependence on the model order, rendering them intractable for models with
large factors, which can naturally arise in practice.
In this paper we focus on the problem of inference in point processes, i.e. distributions P(A) over
subsets A ⊆ V of some finite ground set V. Equivalently, these models can represent arbitrary
distributions over |V| binary variables.¹ Specifically, we consider models that arise from submodular
functions. Recently, Djolonga and Krause [1] discussed inference in probabilistic submodular models
(PSMs), those of the form P(A) ∝ exp(±F(A)), where F is submodular. These models are called
log-submodular (with the plus) and log-supermodular (with the minus) respectively. They generalize
widely used models, e.g., pairwise purely attractive or repulsive Ising models and determinantal point
processes (DPPs) [2]. Approximate inference in these models via variational techniques [1, 3] and
sampling based methods [4, 5] has been investigated.

¹ Distributions over sets A ⊆ V are isomorphic to distributions over |V| binary variables, where each binary
variable corresponds to an indicator whether a certain element is included in A or not.

However, many real-world problems have neither purely log-submodular nor log-supermodular
formulations, but can be naturally expressed in the form P(A) ∝ exp(F(A) - G(A)), where both
F(A) and G(A) are submodular functions; we call these types of models mixed PSMs. For instance,
in a probabilistic model for image segmentation there can be both attractive (log-supermodular)
potentials, e.g., potentials modeling smoothness in the segmentation, and repulsive (log-submodular)
potentials, e.g., potentials indicating that certain pixels should not be assigned to the same class.
While the sampling based approaches for approximate inference are in general applicable to models
with both types of factors, fast mixing is only guaranteed for a subclass of all possible models and
these methods may not scale well to large ground sets. In contrast, the variational inference techniques
were only developed for either log-submodular or log-supermodular models.
In this paper we close this gap and develop variational inference techniques for mixed PSMs. Note
that these models can represent arbitrary positive distributions over sets as any set function can be
represented as the difference of a submodular and a supermodular function [6].2 By exploiting recent
advances in submodular optimization we formulate efficient algorithms for approximate inference
that easily scale to large ground sets and enable the usage of large mixed factors.
Applications/Models. Mixed PSMs are natural models for a variety of applications: modeling of
user preferences, 3D stereo reconstruction [7], and image segmentation [8, 9], to name a few. For
instance, user preferences over items can be used for recommending products in an online marketing
application and naturally capture the economic notions of substitutes and complements. Informally,
item a is a substitute for another item b if, given item b, the utility of a diminishes (log-submodular
potentials); on the other hand, an item c is a complement for item d if, given item d, the utility
of c increases (log-supermodular potentials). Probabilistic models that can model substitutes of
items are for example DPPs [2] and the facility location diversity (FLID) model [10]. In Section 4 we
extend FLID to model both substitutes and complements, which results in improved performance on a
real-world product recommendation task. In terms of computer vision problems, non-submodular
binary pairwise MRFs are widely used [8], e.g., as discussed above in image segmentation.
Our contributions. We generalize the variational inference procedure proposed in [1] to models
containing both log-submodular and log-supermodular potentials, enabling inference in arbitrary
distributions over binary variables. Furthermore, we provide efficient approximate algorithms for
factor-wise coordinate descent updates enabling faster inference for certain types of models, in
particular for rich scalable diversity models. In a large set of experiments we demonstrate the
effectiveness of mixed higher-order models on a product recommendation task and illustrate the merit
of the proposed variational inference scheme.
2 Background: Variational Inference in PSMs

Submodularity. Let F : 2^V → ℝ be a set function, i.e., a function mapping sets A ⊆ V to real
numbers. We will furthermore w.l.o.g. assume that V = {1, 2, ..., n}. Formally, a function F is
called submodular if it satisfies the following diminishing returns property for all A ⊆ B ⊆ V \ {i}:

    F(A ∪ {i}) - F(A) ≥ F(B ∪ {i}) - F(B).

Informally, this property states that the gain of an item i in the context of a smaller set A is larger than
its gain in the context of a larger set B. A function G is called supermodular if -G is submodular.
A function F is modular if it is both submodular and supermodular. Modular functions F can be
written as F(A) = \sum_{i∈A} m_i for some numbers m_i ∈ ℝ, and can thus be parameterized by vectors
m ∈ ℝ^n. As a shorthand we will frequently use m(A) = \sum_{i∈A} m_i.
Probabilistic submodular models (PSMs). PSMs are distributions over sets of the form

    P(A) = (1/Z) exp(±F(A)),

where Z = \sum_{A ⊆ V} exp(±F(A)) ensures that P(A) is normalized, and is often called the partition
function. The distribution P(A) is called log-submodular if the sign in the above definition is positive
and log-supermodular if the sign is negative. These distributions generalize many well known
classical models and have been effectively used for image segmentation [11], and for modeling
diversity of item sets in recommender systems [10]. When F(A) = m(A) is a modular function,
P(A) ∝ exp(F(A)) is called log-modular and corresponds to a fully factorized distribution over n
binary random variables X_1, ..., X_n, where we have for each element i ∈ V an associated variable
X_i indicating if this element is included in A or not. The resulting distribution can be written as

    P(A) = (1/Z) exp(m(A)) = \prod_{i∈A} σ(m_i) \prod_{i∉A} σ(-m_i),

where σ(u) = 1/(1 + e^{-u}) is the sigmoid function.

² As the authors in [6] note, such a decomposition can be in general hard to find.
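Since a log-modular distribution factorizes over the elements, sampling from it is trivial; the following sketch (our own illustration) draws a set by |V| independent Bernoulli trials:

```python
import numpy as np

def sample_log_modular(m, rng=None):
    """Draw A ~ Q with Q(A) proportional to exp(m(A)), m modular.

    Each element i enters A independently with probability
    sigma(m_i) = 1 / (1 + exp(-m_i)).
    """
    rng = rng or np.random.default_rng()
    p = 1.0 / (1.0 + np.exp(-np.asarray(m, dtype=float)))
    return {i for i in range(len(p)) if rng.random() < p[i]}
```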
Variational inference and submodular polyhedra. Djolonga and Krause [1] considered variational
inference for PSMs, whose idea we will present here in a slightly generalized manner. Their
approach starts by bounding F(A) using functions of the form m(A) + t, where m(A) is a modular
function and t ∈ ℝ. Let us first analyze the log-supermodular case. If for all A ⊆ V it holds that
m(A) + t ≤ F(A), then we can bound the partition function Z as

    log Z = log \sum_{A⊆V} e^{-F(A)} ≤ log \sum_{A⊆V} e^{-m(A)-t} = \sum_{i=1}^{n} log(1 + e^{-m_i}) - t.

Then, the idea is to optimize over the free parameters m and t to find the best upper bound, or to
equivalently solve the optimization problem

    min_{(m,t) ∈ L(F)} \sum_{i=1}^{n} log(1 + exp(-m_i)) - t,        (1)

where L(F) is the set of all lower bounds of F, also known as the generalized submodular lower
polyhedron [12]

    L(F) := {(x, t) ∈ ℝ^{n+1} | ∀A ⊆ V : x(A) + t ≤ F(A)}.        (2)

Djolonga and Krause [1] show that one obtains the same optimum if we restrict ourselves to t = 0
and one additional constraint, i.e., if we instead of L(F) use the base polytope B(F) defined as

    B(F) := L(F) ∩ {(x, 0) ∈ ℝ^{n+1} | x(V) = F(V)}.

In words, it contains all modular lower bounds of F that are tight at V and ∅. Thanks to the celebrated
result of Edmonds [13], one can optimize linear functions over B(F) in time O(n log n). This,
together with the fact that log(1 + e^{-u}) is 1/4-smooth, in turn renders the optimization problem (1)
solvable via the Frank-Wolfe procedure [14, 15].

In the log-submodular case, we have to replace in problem (1) the minuses with pluses and use instead
of L(F) the set of upper bounds. This set, denoted as U(F), defined by reversing the inequality
sign in Equation (2), is called the generalized submodular upper polyhedron [12]. Unfortunately, in
contrast to L(F), one can not easily optimize over U(F) and asking membership queries is an NP-hard
problem. As discussed by Iyer and Bilmes [12] there are some special cases, like M♮-concave
functions [16], where one can describe U(F), which we will discuss in Section 3. Alternatively, which is
the approach taken by [3], one can select a specific subfamily of U(F) and optimize over them.
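Edmonds' result gives a concrete linear-optimization oracle for the base polytope. The following sketch (our own helper, on a 0-based ground set, assuming a normalized oracle with F(∅) = 0) returns a minimizing vertex of B(F) for a given cost vector:

```python
import numpy as np

def greedy_base_vertex(F, n, c):
    """Edmonds' greedy algorithm: a vertex x of B(F) minimizing <c, x>.

    F is a submodular set-function oracle on subsets of {0, ..., n-1}
    with F(set()) == 0. Visiting the coordinates in increasing order of
    c and assigning marginal gains of F along the induced chain costs
    O(n log n) for the sort plus n oracle evaluations.
    """
    x = np.zeros(n)
    A = set()
    prev = F(A)
    for i in np.argsort(c):
        A.add(int(i))
        cur = F(A)
        x[int(i)] = cur - prev
        prev = cur
    return x
```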
3 Inference in Mixed PSMs

We consider mixed PSMs, i.e. probability distributions over sets that can be written in the form

    P(A) ∝ exp(F(A) - G(A)),

where F(A) and G(A) are both submodular functions. Furthermore, we assume that F and G
decompose as F(A) = \sum_{i=1}^{m_F} F_i(A) and G(A) = \sum_{i=1}^{m_G} G_i(A), where the functions F_i and G_i are
all submodular. Note that this is not a limiting assumption, as submodular functions are closed under
addition and we can always take m_F = m_G = 1, but such a decomposition will sometimes allow
us to obtain better bounds. The corresponding distribution has the form

    P(A) ∝ \prod_{j=1}^{m_F} exp(F_j(A)) \prod_{j=1}^{m_G} exp(-G_j(A)).        (3)
Similarly to the approach by Djolonga and Krause [1], we perform variational inference by upper
bounding F(A) - G(A) by a modular function parameterized by m and a constant t such that

    F(A) - G(A) ≤ m(A) + t  for all A ⊆ V.        (4)

This upper bound induces the log-modular distribution Q(A) ∝ exp(m(A) + t). Ideally, we would
like to select (m, t) such that the partition function of Q(A) is as small as possible (and thus our
approximation of the partition function of P(A) is as tight as possible), i.e., we aim to solve

    min_{(m,t) ∈ U(F-G)}  t + \sum_{i=1}^{|V|} log(1 + exp(m_i)).        (5)
Optimization (and even membership checks) over U(F - G) is in general difficult, mainly because
of the structure of U(F - G), which is given by 2^n inequalities. Thus, we seek to perform a series
of inner approximations of U(F - G) that make the optimization more tractable.

Approximating U(F - G). In a first step we approximate U(F - G) as U(F) ⊕ L(G) ⊆ U(F - G),
where the summation is understood as a Minkowski sum. Then, we can replace L(G) by B(G)
without losing any expressive power, as shown by the following lemma (see [3, Lemma 6]).

Lemma 1. Optimizing problem (5) over U(F) ⊕ L(G) and over U(F) ⊕ B(G) yields the same
optimum value.

This lemma will turn out to be helpful when we shortly describe our strategy for minimizing (5) over
U(F) ⊕ B(G), as it will render some of our subproblems convex optimization problems over B(G);
these subproblems can then be efficiently solved using the Frank-Wolfe algorithm as proposed in [1],
by noting that a greedy algorithm can be used to solve linear optimization problems over B(G) [17].

By assumption, F(A) and G(A) are composed of simpler functions. First, because G = \sum_{j=1}^{m_G} G_j,
it holds that B(G) = \sum_{j=1}^{m_G} B(G_j) (see e.g. [18]). Second, even though it is hard to describe U(F),
it might hold that U(F_i) has a tractable description, which leads to the natural inner approximation
U(F) ⊇ \sum_{j=1}^{m_F} U(F_j). To wrap up, we performed the following series of inner approximations

    U(F - G)  ⊇  U(F) ⊕ B(G)  ⊇  \sum_{j=1}^{m_F} U(F_j) ⊕ \sum_{j=1}^{m_G} B(G_j),

which we then use to approximate U(F - G) in problem (5) before solving it.
Optimization. To solve the resulting problem we use a block coordinate descent procedure. Let us
first rewrite the problem in a form that enables us to easily describe the algorithm. Let us write our
resulting approximation as

    (m, t) = \sum_{j=1}^{m_F} (f_j, t_j) - \sum_{j=1}^{m_G} (g_j, 0),

where we have constrained (f_j, t_j) ∈ U(F_j) and g_j ∈ B(G_j). The resulting problem is then to solve

    min_{(f_j,t_j) ∈ U(F_j), g_j ∈ B(G_j)}  \sum_{j=1}^{m_F} t_j + \sum_{i=1}^{n} log[1 + exp(\sum_{j=1}^{m_F} f_{j,i} - \sum_{j=1}^{m_G} g_{j,i})],        (6)

where we denote the objective by T((f_j, t_j)_{j=1,...,m_F}, (g_j)_{j=1,...,m_G}).

Then, until convergence, we pick one of the m_G + m_F blocks uniformly at random and solve the
resulting optimization problem, which we now show how to do.
resulting optimization problem, which we now show how to do.
Log-supermodular blocks. For a log-supermodular block j, minimizing (6) over gj is a smooth
convex optimization problem and we can either use the Frank-Wolfe procedure as in [1], or the
divide-and-conquer algorithm (see e.g. [19]). In particular, if we use the Frank-Wolfe procedure we
perform a block coordinate descent step with respect to (6) by iterating the following until we achieve
some desired precision : Given the current gj , we compute ?gj T and use the greedy algorithm to
2
2
solve arg minx?B(Gj ) hx, ?gj T i in O(n log n) time. We then update gj to (1 ? k+2
)x + k+2
gj ,
where k is the iteration number.
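A minimal sketch of this block update, reusing greedy_base_vertex from above (the gradient oracle grad_fn and the fixed iteration budget are our own assumptions; the step size 2/(k+2) is the standard Frank-Wolfe schedule):

```python
def frank_wolfe_block(G, n, g, grad_fn, iters=100):
    """Frank-Wolfe update of one log-supermodular block g_j in B(G_j).

    Each iteration solves the linear subproblem over B(G_j) with the
    greedy algorithm and takes a convex combination of the current
    iterate and the returned vertex.
    """
    for k in range(iters):
        x = greedy_base_vertex(G, n, grad_fn(g))  # linear minimization
        step = 2.0 / (k + 2.0)
        g = (1.0 - step) * g + step * x
    return g
```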
Log-submodular blocks. As we have already mentioned, this optimization step is much more
challenging. One procedure, which is taken by [1], is to consider a set of 2n points inside U(F_j)
and optimize over them, which turns out to be a submodular minimization problem. However, for
specific subfamilies, we can better describe U(F_j). One particularly interesting subfamily is that of
M♮-concave functions [16], which have been studied in economics [20].

A set function H is called M♮-concave if for all A, B ⊆ V and i ∈ A \ B it satisfies

    H(A) + H(B) ≤ H(A \ {i}) + H(B ∪ {i}),  or
    ∃ j ∈ B \ A : H(A) + H(B) ≤ H((A \ {i}) ∪ {j}) + H((B ∪ {i}) \ {j}).

Equivalently, these functions can be defined through the so-called gross substitutability property
known in economics. It turns out that M♮-concave set functions are also submodular. Examples of
these functions include facility location functions, matroid rank functions, monotone concave over
cardinality functions, etc. [16]. For example, H(A) = max_{i∈A} h_i for h_i ≥ 0 is M♮-concave, which
we will exploit in our models in the experimental section.
Returning to our discussion of optimizing (6), if F_j is an M♮-concave function, we can minimize (6)
over (f_j, t_j) ∈ U(F_j) to arbitrary precision in polynomial time. Therefore, we can, similarly
as in [1], use the Frank-Wolfe algorithm by noting that a polynomial time algorithm for computing argmin_{x ∈ U(F_j)} ⟨x, ∇_{(f_j,t_j)} T⟩ exists [20]. Although the minimization can be performed in
polynomial time, it is a very involved algorithm. We therefore consider an inner approximation
Ũ(F_j) := {(m, 0) ∈ ℝ^{n+1} | ∀A ⊆ V : F(A) ≤ m(A)} ⊆ U(F_j) of U(F_j) over which we
can more efficiently approximately minimize (6). As pointed out by Iyer and Bilmes [12], for M♮
functions F_j the polyhedron Ũ(F_j) can be characterized by O(n^2) inequalities as follows:

    Ũ(F_j) := ⋃_{A⊆V} {(m, 0) ∈ ℝ^{n+1} |  ∀i ∈ A : m_i ≤ F_j(A) - F_j(A \ {i}),
                                            ∀k ∉ A : m_k ≥ F_j(A ∪ {k}) - F_j(A),
                                            ∀i ∈ A, k ∉ A : m_i - m_k ≤ F_j(A) - F_j((A ∪ {k}) \ {i})}.

We propose to use Algorithm 1 for minimizing over Ũ(F_j). Given a set A at which we want our modular
approximation to be exact, the algorithm iteratively minimizes the partition function of a modular
upper bound on F_j. Clearly, after the first iteration of the algorithm, (m, 0) is an upper bound on
F_j. Furthermore, the partition function corresponding to that bound decreases monotonically over
the iterations of the algorithm. Several heuristics can be used to select A; in the experiments we
determined A as follows: We initialized B = ∅ and then, while 0 < max_{i∈V\B} F(B ∪ {i}) - F(B),
added the maximizing i to B, i.e. B ← B ∪ {argmax_{i∈V\B} F(B ∪ {i}) - F(B)}. We used the final B of this iteration
as our tight set A.
Algorithm 1 Modular upper bound for M♮-concave functions
Require: M♮ function F, tight set A s.t. m(A) = F(A) for the returned m
  Initialize m randomly
  for l = 1, 2, ..., max. nr. of iterations do
      ▷ Alternately minimize m over the coefficients corresponding to A and V \ A
      ∀i ∈ A : m_i = min{F(A) - F(A \ {i}), min_{k∈V\A} m_k + F(A) - F((A ∪ {k}) \ {i})}
      ∀k ∉ A : m_k = max{F(A ∪ {k}) - F(A), max_{i∈A} m_i - F(A) + F((A ∪ {k}) \ {i})}
  end for
  return Modular upper bound m on F
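A sketch of Algorithm 1 for a generic set-function oracle F (our own code; the exchange-set expressions follow our reconstruction of the constraints above, and the large initialization keeps m an upper bound before the first pass):

```python
def modular_upper_bound(F, V, A, iters=50, big=1e6):
    """Alternating coordinate updates for a modular upper bound on F.

    F: M-natural-concave set-function oracle; V: ground set (iterable);
    A: tight set, chosen e.g. by the greedy heuristic from the text.
    Returns a dict m with m[i] the modular coefficient of element i.
    """
    m = {i: big for i in V}
    A = set(A)
    rest = [k for k in V if k not in A]
    for _ in range(iters):
        for i in A:
            cand = [F(A) - F(A - {i})]
            cand += [m[k] + F(A) - F((A | {k}) - {i}) for k in rest]
            m[i] = min(cand)
        for k in rest:
            cand = [F(A | {k}) - F(A)]
            cand += [m[i] - F(A) + F((A | {k}) - {i}) for i in A]
            m[k] = max(cand)
    return m
```

For instance, F could be one facility-location summand, F(A) = max over i in A of a weight vector (with F(∅) = 0), which is M♮-concave as noted above.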
4 Examples of Mixed PSMs for Modelling Substitutes and Complements

In our experiments we consider probabilistic models that take the following form:

    H(A; α, β) = \sum_{i∈A} u_i + α \sum_{l=1}^{L} F_l(A) + β \sum_{k=1}^{K} G_k(A),
    with F_l(A) = max_{i∈A} r_{l,i} - \sum_{i∈A} r_{l,i}  and  G_k(A) = \sum_{i∈A} a_{k,i} - max_{i∈A} a_{k,i},        (7)

where α, β ∈ {0, 1} switch on/off the repulsive and attractive capabilities of the model, respectively.
We would like to point out that even though \sum_{l=1}^{L} F_l(A) is not M♮-concave, each summand F_l is,
which we will exploit in the next section. The model is parameterized by the vector u ∈ ℝ^{|V|}, and
the weights (r_l)_{l∈[L]}, r_l ∈ ℝ^{|V|}_{≥0}, and (a_k)_{k∈[K]}, a_k ∈ ℝ^{|V|}_{≥0}, which will be explained shortly. From
the general model (7) we instantiate four different models as explained in the following.
Log-modular model. The log-modular model P_mod(A) is instantiated from (7) by setting α = β =
0, i.e. F_mod(A) := H(A; 0, 0), and serves as a baseline model. This model cannot capture any
dependencies between items and corresponds to a fully factorized distribution over the items in V.

Facility location diversity model (FLID). This model is instantiated from (7) by setting α = 1, β = 0,
i.e. F_FLID(A) := H(A; 1, 0), and is known as the facility location diversity model (FLID) [10]. Note that
this induces a log-submodular distribution. The FLID model parameterizes all items i by an item
quality u_i and an L-dimensional vector r_{·,i} ∈ ℝ^L_{≥0} of latent properties. The model assigns a negative
penalty F_l(A) = max_{i∈A} r_{l,i} - \sum_{i∈A} r_{l,i} whenever at least two items in A have the same latent
property (the corresponding dimensions of r_l are > 0); thus the model explicitly captures repulsive
dependencies between items.³ Speaking in economic terms, items with similar latent representations
can be considered as substitutes for each other. The FLID model has been shown to perform on par
with DPPs on product recommendation tasks [10].

³ Clearly, also attractive dependencies between items can thereby be modeled implicitly.
Facility location complements model (FLIC). This model is instantiated from (7) by setting α =
0, β = 1, i.e. F_FLIC(A) := H(A; 0, 1), and defines a log-supermodular probability distribution.
Similar to FLID, the model parameterizes all items i by an item quality u_i and a K-dimensional vector
a_{·,i} ∈ ℝ^K_{≥0} of latent properties. In particular, there is a gain of G_k(A) = \sum_{i∈A} a_{k,i} - max_{i∈A} a_{k,i}
if at least two items in A have the same property k (i.e. for both items the corresponding dimensions
of a_k are > 0). In this way, FLIC captures attractive dependencies among items and assigns high
probabilities to sets of items that have similar latent representations; items with similar latent
representations would be considered as complements in economics.
Facility location diversity and complements model (FLDC). This model is instantiated from (7)
via F_FLDC(A) := H(A; 1, 1). Hence it combines the modelling power of the log-submodular and
log-supermodular models and can explicitly represent attractive and repulsive dependencies. In this
way, FLDC can represent complements and substitutes for the items in V. The induced probability
distribution is neither log-submodular nor log-supermodular.
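For concreteness, the unnormalized log-probability H(A; α, β) of Eq. (7) can be evaluated directly; a sketch with assumed array shapes (our own code):

```python
import numpy as np

def fldc_energy(A, u, R, W, alpha=1.0, beta=1.0):
    """Evaluate H(A; alpha, beta) from Eq. (7).

    u: item qualities, shape (n,); R: repulsive weights r_{l,i}, shape
    (L, n); W: attractive weights a_{k,i}, shape (K, n). Returns the
    unnormalized log-probability of the item set A.
    """
    idx = sorted(A)
    if not idx:
        return 0.0
    f = R[:, idx].max(axis=1) - R[:, idx].sum(axis=1)  # F_l(A), l = 1..L
    g = W[:, idx].sum(axis=1) - W[:, idx].max(axis=1)  # G_k(A), k = 1..K
    return float(u[idx].sum() + alpha * f.sum() + beta * g.sum())
```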
5 Experiments

5.1 Experimental Setup

Dataset. We use the Amazon baby registry dataset [21] for evaluating our proposed variational
inference scheme. This dataset is a standard dataset for benchmarking diversity models and consists
of baby registries collected from Amazon. These registries are split into sub-registries according to
13 different product categories, e.g. safety and carseats. Every category contains 32 to 100 different
items and there are roughly 5,000 to 13,300 sub-registries per category.
Product recommendation task. We construct a realistic product recommendation task from the
registries of every category as follows. Let D = (S_1, ..., S_n) denote the registries from one category.
From this data, we create a new dataset

    D̃ = {(S \ {i}, i) | S ∈ D, |S| ≥ 2, i ∈ S},        (8)

i.e., D̃ consists of tuples, where the first element is a registry from D with one item removed, and
the second element is the removed item. The product recommendation task is to predict i given
S \ {i}. For evaluating the performance of different models on this task we use the following
two metrics: accuracy and mean reciprocal rank. Let us denote the recommendations of a model
given a partial basket A by σ_A : V → [n], where σ_A(a) = 1 means that product a is recommended
highest, σ_A(b) = 2 means that product b is recommended second highest, etc. Then, the accuracy
is computed as

    Acc = (1/|D̃|) \sum_{(S',i) ∈ D̃} 1[σ_{S'}(i) = 1].

The mean reciprocal rank (MRR) is defined as

    MRR = (1/|D̃|) \sum_{(S',i) ∈ D̃} 1/σ_{S'}(i).

For our models we consider predictions according to the posterior
probability of the model given a partial basket A under the constraint that exactly a single item is to
be added, i.e. σ_A(i) = k if product i achieves the k-th largest value of

    P(j | A) = P({j} ∪ A) / \sum_{j' ∈ V\A} P({j'} ∪ A)

for j ∈ V \ A (ties are broken arbitrarily).
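Both metrics are straightforward to compute from a model's rankings; a small sketch (the interface is our own assumption):

```python
def accuracy_and_mrr(rankings, targets):
    """Accuracy and mean reciprocal rank for next-item prediction.

    rankings[t] is the recommended item list for test case t, best
    first (position 1 corresponds to sigma_{S'}(item) = 1); targets[t]
    is the held-out item i removed from the basket.
    """
    hits, rr = 0, 0.0
    for ranked, i in zip(rankings, targets):
        pos = ranked.index(i) + 1        # 1-based rank sigma_{S'}(i)
        hits += (pos == 1)
        rr += 1.0 / pos
    n = len(targets)
    return hits / n, rr / n
```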
5.2 Mixed Models for Product Recommendation

We learned the models described in the previous section using the training data of the different
categories. In case of the modular model, the parameters u were set according to the item frequencies
in the training data. FLID, FLIC and FLDC were learned using noise contrastive estimation (NCE) [22,
10]. We used stochastic gradient descent for optimizing the NCE objective, created 200,000 noise
[Figure 1: bar charts comparing modular, DPP, FLID (L = 10), FLIC (K = 10), and FLDC (L = 10, K = 10)
across the 13 categories (feeding, gear, apparel, diaper, bedding, bath: N = 100; toys, health: N = 62;
media: N = 58; strollers: N = 40; safety: N = 36; carseats: N = 34; furniture: N = 32). Panels:
(a) Accuracy, (b) Mean reciprocal rank, (c) Accuracy of the mixed model for varying L and K.]

Figure 1: (a,b) Accuracy and MRR on the product recommendation task. For all datasets, the mixed
FLDC model has the best performance. For datasets with small ground set (furniture, carseats, safety)
FLID performs better than FLIC. For most other datasets, FLIC outperforms FLID. (c) Accuracy of
FLDC for different numbers of latent dimensions L and K on the diaper dataset. FLDC (L, K > 0)
performs better than FLID (K = 0) and FLIC (L = 0) for the same value of L + K.
samples from the modular model and made 100 passes through the data and noise samples. We then
used the trained models for the product recommendation task from the previous section and estimated
the performance metrics using 10-fold cross-validation. We used K = 10, L = 10 dimensions for the
weights (if applicable for the corresponding model). The results are shown in Figure 1. For reference,
we also report the performance of DPPs trained with EM [21]. Note that for all categories the mixed
FLDC models achieve the best performance, followed by FLIC and FLID. For categories with more
than 40 items (with the exception of health), FLIC performs better than FLID. The modular model
performs worst in all cases. As already observed in the literature, the performance of FLID is similar
to that of DPPs [10]. For categories with small ground sets (safety, furniture, carseats), there is no
advantage of using the higher-order attractive potentials but the repulsive higher-order potentials
improve performance significantly. However, in combination with repulsive potentials the attractive
potentials enable the model to improve performance over models with only repulsive higher-order
potentials.
5.3 Impact of the Dimension Assignment
In Figure 1c we show the accuracy of FLDC for different numbers of latent dimensions L and K for
the category diaper, averaged over the 10 cross-validation folds. Similar results can be observed for
the other categories (not shown here because of space constraints). We note that the best performance
is achieved only for models that have both repulsive and attractive components (i.e. L, K > 0). For
instance, if one is constrained to use only 10 latent dimensions in total, i.e. L + K = 10, the best
performance is achieved for the settings L = 3, K = 7 and L = 2, K = 8.
5.4 Quality of the Marginals

In this section we analyze the quality of the marginals obtained by the algorithm proposed in Section 3.
Therefore we repeat the following experiment for all baskets S, |S| ≥ 2, in the held out test
data. We randomly select a subset S' ⊆ S of 1 to |S| - 1 items and a subset S'' ⊆ V \ S with
|S''| = ⌊|V \ S| / 2⌋, of items not present in the basket. Then we condition our distribution on
the event that the items in S' are present and the items in S'' are not present, i.e. we consider the
distribution P(A | S' ⊆ A, S'' ∩ A = ∅). This conditioning is supposed to resemble a fictitious
product recommendation task in which we condition on items already selected by a user and exclude
items which are of no interest to the user (for instance, according to the user's preferences). We then
compute a modular approximation to the posterior distribution using the algorithm from Section 3,
Table 1: AUC for the considered models on the product recommendation task based on the posterior
marginals. The best result for every dataset is printed in bold. For datasets with at most 62 items,
FLDC has the highest AUC, while for larger datasets FLIC and FLDC have similar AUC. This
indicates a good quality of the marginals computed by the proposed approximate inference procedure.

    Dataset     Modular    FLID       FLIC       FLDC
    safety      0.731304   0.756981   0.731269   0.761168
    furniture   0.701840   0.739646   0.702100   0.759979
    carseats    0.717463   0.770085   0.735472   0.781642
    strollers   0.727055   0.794655   0.827800   0.849767
    health      0.750271   0.754185   0.756873   0.758586
    bath        0.692423   0.705051   0.730443   0.732407
    media       0.666509   0.667848   0.758552   0.780634
    toys        0.724763   0.729089   0.765474   0.777729
    bedding     0.741786   0.744443   0.771159   0.764595
    apparel     0.700694   0.696010   0.778067   0.779665
    diaper      0.685051   0.700543   0.787457   0.787274
    gear        0.687686   0.688116   0.687501   0.688885
    feeding     0.686240   0.686845   0.744043   0.739921
    Average     0.708698   0.725653   0.752016   0.766327
and recommend items according to these approximate marginals. For evaluation, we compute the
AUC for the product recommendation task and average over the test set data. We found that for
different models different modular upper/lower bounds gave the best results. In particular, for FLID
we used the upper bound given by Algorithm 1 to bound each summand Fl (A) in the facility location
term separately. For FLIC and FLID we optimized the lower bound on the partition function by
lower-bounding \sum_{l=1}^{L} F_l(A) and upper bounding \sum_{k=1}^{K} G_k(A), as suggested in [1]. For approximate
inference in FLIC and FLDC we did not split the facility location terms and bounded them as a whole.
The results are summarized in Table 1. We observe that FLDC has the highest AUC for all datasets
with at most 62 items. For larger datasets, FLDC and FLIC have roughly the same performance and
are superior to FLID and the modular model. These findings are similar to those from the previous
section and confirm a good quality of the marginals computed from FLDC and FLIC by the proposed
approximate inference procedure.
6 Related Work

Variational inference in general probabilistic log-submodular models has been first studied in [1].
The authors propose L-FIELD, an approach for approximate inference in both log-submodular and
log-supermodular models based on super- and sub-differentials of submodular functions. In [3] they
extended their work by relating L-FIELD to the minimum norm problem for submodular minimization,
rendering better scalable algorithms applicable to variational inference in log-submodular models.
The aforementioned works can only be applied to models that contain either log-submodular or
log-supermodular potentials and hence do not cover the models considered in this paper.

While the MAP solution in mixed models is known to be NP-hard, there are approximate methods
for its computation based on iterative majorization-minimization (or minorization-maximization)
procedures [23, 24]. In [9] the authors consider mixed models in which the supermodular component
is restricted to a tree-structured cut, and provide several algorithms for approximate MAP computation.
In contrast to our work, these methods are non-probabilistic and only provide an approximate MAP
solution without any notion of uncertainty.
7 Conclusion
We proposed efficient algorithms for approximate inference in mixed submodular models based
on inner approximations of the set of modular bounds on the corresponding energy functions. For
many higher-order potentials, optimizing a modular bound over this inner approximation is tractable.
As a consequence, the approximate inference problem can be approached by a block coordinate
descent procedure, tightening a modular upper bound over the individual higher-order potentials in
an iterative manner. Our approximate inference algorithms enable the computation of approximate
marginals and can easily scale to large ground sets. In a large set of experiments, we demonstrated
the effectiveness of our approach.
Acknowledgements. The authors acknowledge fruitful discussions with Diego Ballesteros. This research
was supported in part by SNSF grant CRSII2-147633, ERC StG 307036, a Microsoft Research Faculty Fellowship
and a Google European Doctoral Fellowship.
References
[1] Josip Djolonga and Andreas Krause. From MAP to Marginals: Variational Inference in Bayesian Submodular Models. In Advances in Neural Information Processing Systems (NIPS), pages 244-252, 2014.
[2] Alex Kulesza and Ben Taskar. Determinantal Point Processes for Machine Learning. Foundations and Trends in Machine Learning, 5(2-3):123-286, 2012.
[3] Josip Djolonga and Andreas Krause. Scalable Variational Inference in Log-supermodular Models. In International Conference on Machine Learning (ICML), 2015.
[4] Alkis Gotovos, Hamed Hassani, and Andreas Krause. Sampling from Probabilistic Submodular Models. In Advances in Neural Information Processing Systems (NIPS), pages 1936-1944, 2015.
[5] Patrick Rebeschini and Amin Karbasi. Fast Mixing for Discrete Point Processes. In Proceedings of the Conference on Learning Theory (COLT), pages 1480-1500, 2015.
[6] Mukund Narasimhan and Jeff Bilmes. A Submodular-Supermodular Procedure with Applications to Discriminative Structure Learning. In Uncertainty in Artificial Intelligence (UAI), 2005.
[7] Oliver Woodford, Philip Torr, Ian Reid, and Andrew Fitzgibbon. Global stereo reconstruction under second-order smoothness priors. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 31(12), 2009.
[8] Carsten Rother, Vladimir Kolmogorov, Victor Lempitsky, and Martin Szummer. Optimizing binary MRFs via extended roof duality. In Computer Vision and Pattern Recognition, 2007 (CVPR '07), IEEE Conference on. IEEE, 2007.
[9] Yoshinobu Kawahara, Rishabh Iyer, and Jeffrey A. Bilmes. On Approximate Non-submodular Minimization via Tree-Structured Supermodularity. In Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics (AISTATS), 2015.
[10] Sebastian Tschiatschek, Josip Djolonga, and Andreas Krause. Learning probabilistic submodular diversity models via noise contrastive estimation. In Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS), 2016.
[11] Jian Zhang, Josip Djolonga, and Andreas Krause. Higher-order inference for multi-class log-supermodular models. In International Conference on Computer Vision (ICCV), 2015.
[12] Rishabh Iyer and Jeff Bilmes. Polyhedral aspects of submodularity, convexity and concavity. arXiv preprint arXiv:1506.07329, 2015.
[13] Jack Edmonds. Matroids and the greedy algorithm. Mathematical Programming, 1(1):127-136, 1971.
[14] Marguerite Frank and Philip Wolfe. An algorithm for quadratic programming. Naval Research Logistics Quarterly, 3(1-2):95-110, 1956.
[15] Martin Jaggi. Revisiting Frank-Wolfe: Projection-free sparse convex optimization. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), pages 427-435, 2013.
[16] Kazuo Murota. Discrete Convex Analysis. SIAM, 2003.
[17] Jack Edmonds. Submodular functions, matroids, and certain polyhedra. In Combinatorial Optimization - Eureka, You Shrink!, pages 11-26. Springer, 2003.
[18] S. Fujishige. Submodular Functions and Optimization. Elsevier, 2005.
[19] Francis R. Bach. Learning with submodular functions: A convex optimization perspective. Foundations and Trends in Machine Learning, 6(2-3):145-373, 2013.
[20] Akiyoshi Shioura. Polynomial-time approximation schemes for maximizing gross substitutes utility under budget constraints. Mathematics of Operations Research, 40(1), 2014.
[21] Jennifer A. Gillenwater, Alex Kulesza, Emily Fox, and Ben Taskar. Expectation-maximization for learning determinantal point processes. In Advances in Neural Information Processing Systems, 2014.
[22] Michael U. Gutmann and Aapo Hyvärinen. Noise-contrastive estimation of unnormalized statistical models, with applications to natural image statistics. The Journal of Machine Learning Research, 13(1), 2012.
[23] Mukund Narasimhan and Jeff Bilmes. A submodular-supermodular procedure with applications to discriminative structure learning. arXiv preprint arXiv:1207.1404, 2012.
[24] Rishabh Iyer and Jeff Bilmes. Algorithms for approximate minimization of the difference between submodular functions, with applications. arXiv preprint arXiv:1207.0560, 2012.
5,776 | 6,226 | The Product Cut
Xavier Bresson
Nanyang Technological University
Singapore
[email protected]
Thomas Laurent
Loyola Marymount University
Los Angeles
[email protected]
Arthur Szlam
Facebook AI Research
New York
[email protected]
James H. von Brecht
California State University, Long Beach
Long Beach
[email protected]
Abstract
We introduce a theoretical and algorithmic framework for multi-way graph partitioning that relies on a multiplicative cut-based objective. We refer to this objective
as the Product Cut. We provide a detailed investigation of the mathematical properties of this objective and an effective algorithm for its optimization. The proposed
model has strong mathematical underpinnings, and the corresponding algorithm
achieves state-of-the-art performance on benchmark data sets.
1 Introduction
We propose the following model for multi-way graph partitioning. Let G = (V, W ) denote a weighted
graph, with V its vertex set and W its weighted adjacency matrix. We define the Product Cut of a
partition P = (A1 , . . . , AR ) of the vertex set V as
    Pcut(P) = ∏_{r=1}^{R} Z(A_r, A_r^c)^{1/n} / e^{H(P)},    H(P) = -∑_{r=1}^{R} θ_r log θ_r,    (1)
where θ_r = |A_r|/|V| denotes the relative size of a set. This model provides a distinctive way to
incorporate classical notions of a quality partition. The non-linear, non-local function Z(A_r, A_r^c) of
measures deviations of the partition P from a collection of sets (A1 , . . . , AR ) with equal size. In this
way, the Product Cut optimization parallels the classical Normalized Cut optimization [10, 15, 13] in
terms of its underlying notion of cluster, and it arises quite naturally as a multiplicative version of the
Normalized Cut.
Nevertheless, the two models strongly diverge beyond the point of this superficial similarity. We
provide a detailed analysis to show that (1) settles the compromise between cut and balance in a
fundamentally different manner than classical objectives, such as the Normalized Cut or the Cheeger
Cut. The sharp inequalities
    0 ≤ Ncut(P) ≤ 1,    e^{-H(P)} ≤ Pcut(P) ≤ 1    (2)
succinctly capture this distinction; the Product Cut exhibits a non-vanishing lower bound while the
Normalized Cut does not. We show analytically and experimentally that this distinction leads to
superior stability properties and performance. From an algorithmic point-of-view, we show how
to cast the minimization of (1) as a convex maximization program. This leads to a simple, exact
continuous relaxation of the discrete problem that has a clear mathematical structure. We leverage this
formulation to develop a monotonic algorithm for optimizing (1) via a sequence of linear programs,
and we introduce a randomized version of this strategy that leads to a simple yet highly effective
algorithm. We also introduce a simple version of Algebraic Multigrid (AMG) tailored to our problem
that allows us to perform each step of the algorithm at very low cost. On graphs that contain
reasonably well-balanced clusters of medium scale, the algorithm provides a strong combination
of accuracy and efficiency. We conclude with an experimental evaluation and comparison of the
algorithm on real world data sets to validate these claims.
2 The Product Cut Model
We begin by introducing our notation and by describing the rationale underlying our model. We use
G = (V, W) to denote a graph on n vertices V = {v_1, ..., v_n} with weighted edges W = {w_ij}_{i,j=1}^{n}
that encode similarity between vertices. We denote partitions of the vertex set into R subsets as
P = (A_1, ..., A_R), with the understanding that the A_r ⊂ V satisfy the covering constraint
A_1 ∪ ... ∪ A_R = V, the non-overlapping constraint A_r ∩ A_s = ∅ (r ≠ s) and the non-triviality
constraint A_r ≠ ∅. We use f, g, h, u, v to denote vertex functions f : V → R, which we view as functions
f(v_i) and n-vectors f ∈ R^n interchangeably. For a subset A ⊂ V we use |A| for its cardinality and 1_A for
its indicator function. Finally, for a given graph G = (V, W) we use D := diag(W 1_V) to denote the
diagonal matrix of weighted vertex degrees.
The starting point for our model arises from a well-known and widely used property of the random
walk on a graph. Namely, a random walker initially located in a cluster A is unlikely to leave
that cluster quickly [8]. Different approaches of quantifying this intuition then lead to a variety of
multi-way partitioning strategies for graphs [11, 12, 1]. The personalized page-rank methodology
provides an example of this approach. Following [1], given a scalar 0 < α < 1 and a non-empty
vertex subset A we define

    pr_A := M_α^{-1} 1_A / |A|,    M_α := (Id - α W D^{-1}) / (1 - α)    (3)

as its personalized page-rank vector. As 1_A/|A| is the uniform distribution on the set A and W D^{-1} is
the transition matrix of the random walk on the graph, pr_A corresponds to the stationary distribution
of a random walker that, at each step, moves with probability α to a neighboring vertex by a usual
random walk, and has a probability (1 - α) to teleport to the set A. If A has a reasonable cluster
structure, then pr_A will concentrate on A and assign low probabilities to its complement. Given a
high-quality partition P = (A_1, ..., A_R) of V, we therefore expect that ρ_{i,r} := pr_{A_r}(v_i) should
achieve its maximal value over 1 ≤ r ≤ R when r = r(i) is the class of the i-th vertex.
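For concreteness, Eq. (3) can be evaluated with a few lines of dense linear algebra; the sketch below uses our own names, and a sparse or AMG solver would replace the dense solve at scale:

    import numpy as np

    def pagerank_vector(W, A, alpha=0.9):
        """Personalized page-rank vector pr_A = M_alpha^{-1} (1_A / |A|), Eq. (3).

        W: dense symmetric (n, n) weight matrix with positive degrees;
        A: iterable of vertex indices. Dense solve, so only for small graphs.
        """
        n = W.shape[0]
        d_inv = 1.0 / W.sum(axis=1)                        # inverse vertex degrees
        M = (np.eye(n) - alpha * W * d_inv) / (1 - alpha)  # M_alpha; W * d_inv = W D^{-1}
        b = np.zeros(n)
        b[list(A)] = 1.0 / len(A)                          # uniform distribution 1_A / |A|
        return np.linalg.solve(M, b)                       # stationary distribution pr_A

By construction the result is a probability vector: expanding M_α^{-1} as a Neumann series shows it is non-negative and sums to one.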
Viewed from this perspective, we can formulate an R-way graph partitioning problem as the task of
selecting P = (A_1, ..., A_R) to maximize some combination of the collection {ρ_{i,r(i)} : i ∈ V} of
page-rank probabilities generated by the partition. Two intuitive options immediately come to mind,
the arithmetic and geometric means of the collection:
    Maximize (1/n) ∑_r ∑_{v_i ∈ A_r} pr_{A_r}(v_i)  over all partitions (A_1, ..., A_R) of V into R sets.    (4)

    Maximize ( ∏_r ∏_{v_i ∈ A_r} pr_{A_r}(v_i) )^{1/n}  over all partitions (A_1, ..., A_R) of V into R sets.    (5)
The first option corresponds to a straightforward variant of the classical Normalized Cut. The second
option leads to a different type of cut-based objective that we term the Product Cut. The underlying
reason for considering (5) is quite natural. If we view each prAr as a probability distribution, then (5)
corresponds to a formal likelihood of the partition. This proves quite analogous to re-formulating the
classical k-means objective for partitioning n data points (x1 , . . . , xn ) into R clusters (A1 , . . . , AR )
in terms of maximizing a likelihood
QR Q
kxi ?mr k2
)
r=1
vi ?Ar exp(?
2? 2
r
of Gaussian densities. While the Normalized Cut variant (4) is certainly popular, we show that it
suffers from several defects that the Product Cut resolves. As the Product Cut can be effectively
optimized and generally leads to higher quality partitions, it therefore provides a natural alternative.
To make these ideas precise, let us define the α-smoothed similarity matrix as Ω_α := M_α^{-1} and use
{ω_ij}_{i,j=1}^{n} to denote its entries. Thus ω_ij = (M_α^{-1} 1_{v_j})_i = pr_{{v_j}}(v_i), and so ω_ij gives a non-local
measure of similarity between the vertices v_i and v_j by means of the personalized page-rank diffusion
process. The matrix Ω_α is column stochastic, non-symmetric, non-sparse, and has diagonal entries
greater than (1 - α). Given a partition P = (A_1, ..., A_R), we define
    Pcut(P) := ∏_{r=1}^{R} Z(A_r, A_r^c)^{1/n} / e^{H(P)}    and    Ncut(P) := (1/R) ∑_{r=1}^{R} Cut(A_r, A_r^c) / Vol(A_r)    (6)

as its Product Cut and Normalized Cut, respectively. The non-linear, non-local function

    Z(A, A^c) := ∏_{v_i ∈ A} ( 1 + (∑_{j ∈ A^c} ω_{ij}) / (∑_{j ∈ A} ω_{ij}) )    (7)
of a set measures its intra- and inter-connectivity with respect to the graph while H(P) denotes the
entropic balance (1). The definitions of
    Cut(A, A^c) = ∑_{i ∈ A^c} ∑_{j ∈ A} ω_{ij}    and    Vol(A) = ∑_{i ∈ V} ∑_{j ∈ A} ω_{ij}
are standard. A simple computation then shows that maximizing the geometric average (5) is
equivalent to minimizing the Product Cut, while maximizing the arithmetic average (4) is equivalent
to minimizing the Normalized Cut. At a superficial level, both models wish to achieve the same
goal. The numerator of the Product Cut aims at a partition in which each vertex is weakly connected
to vertices from other clusters and strongly connected with vertices from its own cluster. The
denominator H(P) is maximal when |A1 | = |A2 | = . . . = |AR |, and so aims at a well-balanced
partition of the vertices. The objective (5) therefore promotes partitions with strongly intra-connected
clusters and weakly inter-connected clusters that have comparable size. The Normalized Cut, defined
here on Ω_α but usually posed over the original similarity matrix W, is exceedingly well-known
[10, 15] and also aims at finding a good balance between low cut value and clusters of comparable
sizes.
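To make these quantities concrete, the following sketch evaluates Eqs. (6)-(7) directly (our names; Ω_α is inverted densely, which is only sensible for small graphs):

    import numpy as np

    def pcut_and_ncut(W, partition, alpha=0.9):
        """Evaluate Pcut(P) and Ncut(P) of Eq. (6) for a partition (A_1, ..., A_R)."""
        n = W.shape[0]
        d_inv = 1.0 / W.sum(axis=1)
        Omega = np.linalg.inv((np.eye(n) - alpha * W * d_inv) / (1 - alpha))
        H = -sum(len(A) / n * np.log(len(A) / n) for A in partition)  # entropic balance
        log_prod, ncut = 0.0, 0.0
        for A in partition:
            mask = np.zeros(n, dtype=bool)
            mask[list(A)] = True
            w_in = Omega[:, mask].sum(axis=1)    # sum_{j in A} omega_{ij}, every row i
            w_out = Omega[:, ~mask].sum(axis=1)  # sum_{j in A^c} omega_{ij}
            log_prod += np.log1p(w_out[mask] / w_in[mask]).sum() / n  # (1/n) log Z(A, A^c)
            ncut += Omega[np.ix_(~mask, mask)].sum() / Omega[:, mask].sum()
        return np.exp(log_prod - H), ncut / len(partition)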
Despite this apparent parallel between the Product and Normalized Cuts, the two objectives behave
quite differently both in theory and in practice. To illustrate this discrepancy at a high level, note first
that the following sharp bounds
    0 ≤ Ncut(P) ≤ 1    (8)
hold for the Normalized Cut. The lower bound is attained for partitions P in which the clusters are
mutually disconnected. For the Product Cut, we have
Theorem 1 The following inequality holds for any partition P:
    e^{-H(P)} ≤ Pcut(P) ≤ 1.    (9)
Moreover the lower bound is attained for partitions P in which the clusters are mutually disconnected.
The lower bound in (9) can be directly read from (6) and (7), while the upper bound is non-trivial and
proved in the supplementary material. This theorem goes to the heart of the difference between the
Product and Normalized Cuts. To illustrate this, let P^{(k)} denote a sequence of partitions. Then (9)
shows that

    lim_{k→∞} H(P^{(k)}) = 0  ⟹  lim_{k→∞} Pcut(P^{(k)}) = 1.    (10)
In other words, an arbitrarily ill-balanced partition leads to arbitrarily poor values of its Product Cut.
The Normalized Cut does not possess this property. As an extreme but easy-to-analyze example,
consider the case where G = (V, W ) is a collection of isolated vertices. All possible partitions P
consist of mutually disconnected clusters and the lower bound is reached for both (8) and (9). Thus
Ncut(P) = 0 for all P and so all partitions are equivalent for the Normalized Cut. On the other
hand Pcut(P) = e^{-H(P)}, which shows that, in the absence of "cut information," the Product Cut
will choose the partition that maximizes the entropic balance. So in this case, any partition P for
which |A1 | = . . . = |AR | will be a minimizer. In essence, this tighter lower bound for the Product
Cut reflects its stronger balancing effect vis-a-vis the Normalized Cut.
2.1 (In-)Stability Properties of the Product Cut and Normalized Cut
In practice, the stronger balancing effect of the Product Cut manifests as a stronger tolerance to
perturbations. We now delve deeper and contrast the two objectives by analyzing their stability
properties using experimental data as well as a simplified model problem that isolates the source of
the inherent difficulties.

Figure 1: The graphs G'_n used for analyzing stability. (a) A_n in blue, B_n in green, C in red. (b) P'_{n,good} = (A_n, B_n ∪ C). (c) P'_{n,bad} = (A_n ∪ B_n, C).
Invoking ideas from dynamical systems theory, we say an objective is stable
if an infinitesimal perturbation of a graph G = (V, W ) leads to an infinitesimal perturbation of the
optimal partition. If an infinitesimal perturbation leads to a dramatic change in the optimal partition,
then the objective is unstable.
We use a simplified model to study stability of the Product Cut and Normalized Cut objectives.
Consider a graph G_n = (V_n, W_n) made of two clusters A_n and B_n containing n vertices each. Each
vertex in G_n has degree k and is connected to εk vertices in the opposite cluster, where 0 ≤ ε ≤ 1.
The graph G'_n is a perturbation of G_n constructed by adding a small cluster C of size n_0 ≪ n to
the original graph. Each vertex of C has degree k_0 and is connected to ε_0 k_0 vertices in B_n and
(1 - ε_0) k_0 vertices in C for some 0 ≤ ε_0 ≤ 1. In the perturbed graph G'_n, a total of n_0 vertices in
B_n are linked to C and have degree k + ε_0 k_0. See figure 1(a). The main properties of G_n, G'_n are

  • Unperturbed graph G_n:  |A_n| = |B_n| = n,  Cond_{G_n}(A_n) = ε,  Cond_{G_n}(B_n) = ε.
  • Perturbed graph G'_n:  |A_n| = |B_n| = n,  |C| = n_0 ≪ n,  Cond_{G'_n}(A_n) = ε,  Cond_{G'_n}(B_n) ≈ ε,  Cond_{G'_n}(C) = ε_0.
where Cond_G(A) = Cut(A, A^c) / min(|A|, |A^c|) denotes the conductance of a set. If we consider
the parameters ε, ε_0, k, k_0, n_0 as fixed and look at the perturbed graph G'_n in the limit n → ∞ of a
large number of vertices, then as n becomes larger the degree of the bulk vertices will remain constant
while the size |C| of the perturbation becomes infinitesimal.

To examine the influence of this infinitesimal perturbation for each model, let P_n = (A_n, B_n)
denote the desired partition of the unperturbed graph G_n and let P'_{n,good} = (A_n, B_n ∪ C) and
P'_{n,bad} = (A_n ∪ B_n, C) denote the partitions of the perturbed graph G'_n depicted in figure 1(b)
and 1(c), respectively. As P'_{n,good} ≈ P_n, a stable objective will prefer P'_{n,good} to P'_{n,bad} while any
objective preferring the converse is unstable.
specific graph family. We summarize the conclusions of this analysis in the theorem below, which
shows that the Normalized Cut is unstable in certain parameter regimes while the Product Cut is
always stable. The supplementary material contains the proof.
Theorem 2 Suppose that ε, ε_0, k, k_0, n_0 are fixed. Then

    ε_0 < 2ε  ⟹  Ncut_{G'_n}(P'_{n,good}) > Ncut_{G'_n}(P'_{n,bad})  for n large enough.    (11)

    Pcut_{G'_n}(P'_{n,good}) < Pcut_{G'_n}(P'_{n,bad})  for n large enough.    (12)
Statement (11) simply says that the large cluster A_n must have a conductance ε at least twice
better than the conductance ε_0 of the small perturbation cluster C in order to prevent instability.
Thus adding an infinitesimally small cluster with mediocre conductance (up to two times worse
the conductance of the main structure) has the potential of radically changing the partition selected
by the Normalized Cut. Moreover, this result holds for the classical Normalized Cut, its smoothed
variant (4) as well as for similar objectives such as the Cheeger Cut and Ratio Cut. Conversely,
(12) shows that adding an infinitesimally small cluster will not affect the partition selected by the
Product Cut.

                                                     e^{-H(P)}   Pcut(P)   Ncut(P)
  Partition P of WEBKB4 found by the Pcut algo.      .2506       .5335     .5257
  Partition P of WEBKB4 found by the Ncut algo.      .7946       .8697     .5004
  Partition P of CITESEER found by the Pcut algo.    .1722       .4312     .5972
  Partition P of CITESEER found by the Ncut algo.    .7494       .8309     .5217

Figure 2: The Product and Normalized Cuts on WEBKB4 (R = 4 clusters) and CITESEER (R = 6
clusters). The pie charts visually depict the sizes of the clusters in each partition. In both cases, NCut
returns a super-cluster while PCut returns a well-balanced partition. The NCut objective prefers the
ill-balanced partitions while the PCut objective dramatically prefers the balanced partitions.
The proof, while lengthy, is essentially just theorem 1 in disguise. To see this, note
that the sequence of partitions P'_{n,bad} becomes arbitrarily ill-balanced, which from (10) implies
lim_{n→∞} Pcut_{G'_n}(P'_{n,bad}) = 1. However, the unperturbed graph G_n grows in a self-similar fashion
as n → ∞ and so the Product Cut of P_n remains approximately a constant, say β, for all n. Thus
Pcut_{G_n}(P_n) ≈ β < 1 for n large enough, and Pcut_{G'_n}(P'_{n,good}) ≈ Pcut_{G_n}(P_n) since |C| is
infinitesimal. Therefore Pcut_{G'_n}(P'_{n,good}) ≈ β < 1. Comparing this upper bound with the fact that
lim_{n→∞} Pcut_{G'_n}(P'_{n,bad}) = 1, we see that the Product Cut of P'_{n,bad} becomes eventually larger than
the Product Cut of P'_{n,good}. While we execute this program in full only for the example above, this
line of argument is fairly general and similar stability estimates are possible for more general families
of graphs.
This general contrast between the Product Cut and the Normalized Cut extends beyond the realm of
model problems, as the user familiar with off-the-shelf NCut codes likely knows. When provided
with "dirty" graphs, for example an e-mail network or a text data set, NCut has the aggravating
tendency to return a super-cluster. That is, NCut often returns a partition P = (A1 , . . . , AR ) where
a single set A_r contains the vast majority of the vertices. Figure 2 illustrates this phenomenon. It
compares the partitions obtained for NCut (computed on Ω_α using a modification of the standard
spectral approximation from [15]) and for PCut (computed using the algorithm presented in the
next section) on two graphs constructed from text data sets. The NCut algorithm returns highly
ill-balanced partitions containing a super-cluster, while PCut returns an accurate and well-balanced
partition. Other strategies for optimizing NCut obtain similarly unbalanced partitions. As an example,
using the algorithm from [9] with the original sparse weight matrix W leads to relative cluster sizes
of 99.2%, 0.5%, 0.2% and 0.1% for WEBKB4 and 98.5%, 0.4%, 0.3%, 0.3%, 0.3% and 0.2% for
CITESEER. As our theoretical results indicate, these unbalanced partitions result from the normalized
cut criterion itself and not the algorithm used to minimize it.
3 The Algorithm
Our strategy for optimizing the Product Cut relies on a popular paradigm for discrete optimization,
i.e. exact relaxation. We begin by showing that the discrete, graph-based formulation (5) can be
relaxed to a continuous optimization problem, specifically a convex maximization program. We then
prove that this relaxation is exact, in the sense that optimal solutions of the discrete and continuous
problems coincide. With an exact relaxation in hand, we may then appeal to continuous optimization
strategies (rather than discrete or greedy ones) for optimizing the Product Cut. This general idea of
exact relaxation is intimately coupled with convex maximization.
Assume that the graph G = (V, W ) is connected. Then by taking the logarithm of (5) we see that (5)
is equivalent to the problem
    Maximize ∑_{r=1}^{R} ∑_{i ∈ A_r} log( (Ω_α 1_{A_r})_i / |A_r| )  over all partitions P = (A_1, ..., A_R) of V into R non-empty subsets.    (P)
The relaxation of (P) then follows from the usual approach. We first encode sets A_r ⊂ V as binary
vertex functions 1_{A_r}, then relax the binary constraint to arrive at a continuous program. Given a
vertex function f ∈ R^n_+ with non-negative entries, we define the continuous energy e(f) as
    e(f) := ⟨ f, log( Ω_α f / ⟨f, 1_V⟩ ) ⟩  if f ≠ 0,  and e(0) = 0,

where ⟨·, ·⟩ denotes the usual dot product in R^n and the logarithm applies entrywise. As (Ω_α f)_i > 0
whenever f ≠ 0, the continuous energy is well-defined. After noting that ∑_r e(1_{A_r}) is simply the
objective value in problem (P), we arrive at the following continuous relaxation

    Maximize ∑_{r=1}^{R} e(f_r)  over all (f_1, ..., f_R) ∈ R^n_+ × ··· × R^n_+ satisfying ∑_{r=1}^{R} f_r = 1_V,    (P-rlx)

where the non-negative cone R^n_+ consists of all vectors in R^n with non-negative entries.
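The relaxed energy is cheap to evaluate once Ω_α is available; a minimal sketch (our names), with a comment showing that e(1_A) recovers a summand of (P):

    import numpy as np

    def energy(Omega, f):
        """e(f) = <f, log(Omega f / <f, 1>)>, with e(0) = 0 (connected graph assumed)."""
        if not f.any():
            return 0.0
        return float(f @ np.log(Omega @ f / f.sum()))

    # For an indicator f = 1_A this equals sum_{i in A} log((Omega 1_A)_i / |A|),
    # i.e. exactly one summand of problem (P).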
The following theorem provides the theoretical underpinning for our algorithmic approach. It
establishes convexity of the relaxed objective for connected graphs.
Theorem 3 Assume that G = (V, W ) is connected. Then the energy e(f ) is continuous, positive
1-homogeneous and convex on R^n_+. Moreover, the strict convexity property

    e(λf + (1 - λ)g) < λ e(f) + (1 - λ) e(g)  for all λ ∈ (0, 1)

holds whenever f, g ∈ R^n_+ are linearly independent.
The continuity of e(f ) away from the origin as well as the positive one-homogeneity are obvious,
while the continuity of e(f ) at the origin is easy to prove. The proof of convexity of e(f ), provided
in the supplementary material, is non-trivial and heavily relies on the particular structure of ?? itself.
With convexity of e(f ) in hand, we may prove the main theorem of this section.
Theorem 4 ( Equivalence of (P) and (P-rlx) ) Assume that G = (V, W ) is connected and that V
contains at least R vertices. If P = (A1 , . . . , AR ) is a global optimum of (P) then (1A1 , . . . , 1AR )
is a global optimum of (P-rlx) . Conversely, if (f1 , . . . , fR ) is a global optimum of (P-rlx) then
(f1 , . . . , fR ) = (1A1 , . . . , 1AR ) where (A1 , . . . , AR ) is a global optimum of (P).
Proof. By strict convexity, the solution of the maximization (P-rlx) occurs at the extreme points of
the constraint set Δ = {(f_1, ..., f_R) : f_r ∈ R^n_+ and ∑_{r=1}^{R} f_r = 1}. Any such extreme point takes
the form (1_{A_1}, ..., 1_{A_R}), where necessarily A_1 ∪ ... ∪ A_R = V and A_r ∩ A_s = ∅ (r ≠ s) hold. It
therefore suffices to rule out extreme points that have an empty set of vertices. But if A ≠ B are
non-empty then 1_A, 1_B are linearly independent, and so the inequality e(1_A + 1_B) < e(1_A) + e(1_B)
holds by strict convexity and one-homogeneity. Thus given a partition of the vertices into R - 1
non-empty subsets and one empty subset, we can obtain a better energy by splitting one of the
non-empty vertex subsets into two non-empty subsets. Thus any globally maximal partition cannot
contain empty subsets.
With theorems 3 and 4 in hand, we may now proceed to optimize (P) by searching for optima of
its exact relaxation. We tackle the latter problem by leveraging sequential linear programming or
gradient thresholding strategies for convex maximization. We may write (P-rlx) as

    Maximize E(F)  subject to  F ∈ C and Φ_i(F) = 0 for i = 1, ..., n    (13)

where F = (f_1, ..., f_R) is the optimization variable, E(F) is the convex energy to be maximized, C
is the bounded convex set [0, 1]^n × ··· × [0, 1]^n and the n affine constraints Φ_i(F) = 0 correspond
to the row-stochastic constraints ∑_{r=1}^{R} f_{i,r} = 1. Given a current feasible estimate F^k of the solution,
we obtain the next estimate F^{k+1} by solving the linear program

    Maximize L_k(F)  subject to  F ∈ C and Φ_i(F) = 0 for i = 1, ..., n    (14)

where L_k(F) = E(F^k) + ⟨∇E(F^k), F - F^k⟩ is the linearization of the energy E(F) around the
current iterate. By convexity of E(F), this strategy monotonically increases E(F^k) since E(F^{k+1}) ≥
L_k(F^{k+1}) ≥ L_k(F^k) = E(F^k). The iterates F^k therefore encode a sequence of partitions of V that
monotonically increase the energy at each step. Either the current iterate maximizes the linear form,
in which case first-order optimality holds, or else the subsequent iterate produces a partition with a
strictly larger objective value.

Algorithm 1 Randomized SLP for PCut

  Initialization: (f_1^0, ..., f_R^0) = (1_{A_1}, ..., 1_{A_R}) for (A_1, ..., A_R) a random partition of V
  for k = 0 to maxiter do
    for r = 1 to R do
      Set f̃_r = f_r^k / ( ∑_{i=1}^{n} f_{i,r}^k ), then solve M_α u_r = f̃_r
      Set g_{i,r} = f_{i,r} / u_{i,r} for i = 1, ..., n, then solve M_α^T v_r = g_r
      Set h_r = log u_r + v_r - 1
    end for
    Choose at random s_k vertices and let I ⊂ V be these vertices.
    for all i ∈ V do
      If i ∈ I then f_{i,r}^{k+1} = 1 if r = arg max_s h_{i,s}, and 0 otherwise;
      if i ∉ I then f_{i,r}^{k+1} = 1 if h_{i,r} > 0, and 0 otherwise.
    end for
  end for
The latter case can occur only a finite number of times, as only a finite number of partitions exist.
Thus the sequence F^k converges after a finite number of iterations.
While simple and easy to implement, this algorithm suffers from a severe case of early termination.
When initialized from a random partition, the iterates F^k almost immediately converge to a poor-quality
solution. We may rescue this poor-quality algorithm and convert it to a highly effective one,
while maintaining its simplicity, by randomizing the LP (14) at each step in the following way. At
step k we solve the LP

    maximize L_k(F)  subject to  F ∈ C and Φ_i(F) = 0 for i ∈ I_k,    (15)
where the set I_k is a random subset of {1, 2, ..., n} obtained by drawing s_k constraints uniformly at
random without replacement. The LP (15) is therefore a version of LP (14) in which we have dropped
a random set of constraints. If we start by enforcing a small number s_k of constraints and slowly
increment this number s_{k+1} = s_k + Δs_k as the algorithm progresses, we allow the algorithm time
to explore the energy landscape. Enforcing more constraints as the iterates progress ensures that
(15) eventually coincides with (14), so convergence of the iterates F^k of the randomized algorithm
is still guaranteed. The attraction is that LP (15) has a simple, closed-form solution given by a
variant of gradient thresholding. We derive the closed form solution of LP (15) in section 1 of the
supplementary material, and this leads to Algorithm 1 above.
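For concreteness, one iteration of Algorithm 1 might look as follows in Python; dense solves stand in for the AMG solver and the names are ours:

    import numpy as np

    def slp_step(W, F, s_k, alpha=0.9, rng=None):
        """One randomized SLP step; F is the (n, R) 0/1 assignment matrix f^k."""
        rng = np.random.default_rng() if rng is None else rng
        n, R = F.shape
        d_inv = 1.0 / W.sum(axis=1)
        M = (np.eye(n) - alpha * W * d_inv) / (1 - alpha)
        H = np.empty((n, R))
        for r in range(R):
            u = np.linalg.solve(M, F[:, r] / F[:, r].sum())  # M_alpha u_r = f~_r
            v = np.linalg.solve(M.T, F[:, r] / u)            # M_alpha^T v_r = g_r
            H[:, r] = np.log(u) + v - 1.0                    # h_r = log u_r + v_r - 1
        F_new = (H > 0).astype(float)                        # rows with dropped constraints
        I = rng.choice(n, size=s_k, replace=False)           # rows where the constraint holds
        F_new[I] = 0.0
        F_new[I, H[I].argmax(axis=1)] = 1.0                  # one-hot at the largest h_{i,r}
        return F_new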
The overall effectiveness of this strategy relies on two key ingredients. The first is a proper choice
of the number of constraints s_k to enforce at each step. Selecting the rate at which s_k increases is
similar, in principle, to selecting a learning rate schedule for a stochastic gradient descent algorithm.
If s_k increases too quickly then the algorithm will converge to poor-quality partitions. If s_k increases
too slowly, the algorithm will find a quality solution but waste computational effort. A good rule of
thumb is to linearly increase s_k at some constant rate Δs_k ≡ ν until all constraints are enforced, at
which point we switch to the deterministic algorithm and terminate the process at convergence. The
second key ingredient involves approximating solutions to the linear system M_α x = b quickly. We
use a simple Algebraic Multigrid (AMG) technique, i.e. a stripped-down version of [7] or [6], to
accomplish this. The main insight here is that exact solutions of M_α x = b are not needed, but not all
approximate solutions are effective. We need an approximate solution x that has non-zero entries on
all of V for thresholding to succeed, and this can be accomplished by AMG at very little cost.
4 Experiments
We conclude our study of the Product Cut model by presenting extensive experimental evaluation
of the algorithm.¹ We intend these experiments to highlight the fact that, in addition to a strong
theoretical model, the algorithm itself leads to state-of-the-art performance in terms of cluster purity
on a variety of real world data sets. We provide experimental results on four text data sets (20NEWS,
RCV1, WEBKB4, CITESEER) and four data sets containing images of handwritten digits (MNIST,
PENDIGITS, USPS, OPTDIGITS). We provide the source for these data sets and details on their
construction in the supplementary material.

¹ The code is available at https://github.com/xbresson/pcut

Table 1: Algorithmic Comparison via Cluster Purity.

                 20NEWS  RCV1   WEBKB4  CITESEER  MNIST  PENDIGITS  USPS   OPTDIGITS
  size           20K     9.6K   4.2K    3.3K      70K    11K        9.3K   5.6K
  R              20      4      4       6         10     10         10     10
  RND            6       30     39      22        11     12         17     12
  NCUT           27      38     40      23        77     80         72     91
  LSD            34      38     46      53        76     86         70     91
  MTV            36      43     45      43        96     87         85     95
  GRACLUS        42      42     49      54        97     85         87     94
  NMFR           61      43     58      63        97     87         86     98
  PCut (.9, ν1)  61      53     58      63        97     87         89     98
  PCut (.9, ν2)  60      50     57      64        96     84         89     95
We compare our method against partitioning algorithms
that, like the Product Cut, rely on graph-cut objective principles and that partition the graph in a direct,
non-recursive manner. The NCut algorithm [15] is a widely used spectral algorithm that relies on a
post-processing of the eigenvectors of the graph Laplacian to optimize the Normalized Cut energy.
The NMFR algorithm [14] uses a graph-based random walk variant of the Normalized Cut. The LSD
algorithm [2] provides a non-negative matrix factorization algorithm that relies upon a trace-based
relaxation of the Normalized Cut objective. The MTV algorithm from [3] and the balanced k-cut
algorithm from [9] provide total-variation based algorithms that attempt to find an optimal multi-way
Cheeger cut of the graph by using ℓ1 optimization techniques. Both algorithms optimize the same
objective and achieve similar purity values. We report results for [3] only. The GRACLUS algorithm
[4, 5] uses a multi-level coarsening approach to optimize the NCut objective as formulated in terms
of kernel k-means. Table 1 reports the accuracy obtained by these algorithms for each data set. We
use cluster purity to quantify the quality of the calculated partition, defined according to the relation
Purity = (1/n) ∑_{r=1}^{R} max_{1≤i≤R} m_{r,i}. Here m_{r,i} denotes the number of data points in the r-th cluster
that belong to the i-th ground-truth class. The third row of the table (RND) provides a base-line purity
for reference, i.e. the purity obtained by assigning each data point to a class from 1 to R uniformly at
random. The PCut, MTV and GRACLUS algorithms rely on randomization, so for these algorithms
we report the average purity achieved over 500 different runs. For the PCut algorithm, we use α = .9
when defining Ω_α. Also, in order to illustrate the tradeoff when selecting the rate at which the number
of enforced constraints s_k increases, we report accuracy results for the linear rates

    Δs_k = 10^{-4} · n =: ν1    and    Δs_k = 5 · 10^{-4} · n =: ν2
where n denotes the total number of vertices in the data set. By and large both PCut and NMFR
consistently outperform the other algorithms in terms of accuracy.
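The purity measure used in Table 1 takes only a few lines, assuming integer cluster assignments and ground-truth labels (names are ours):

    import numpy as np

    def purity(assignments, labels):
        """(1/n) sum_r max_i m_{r,i}, with m_{r,i} the cluster/class confusion counts."""
        assignments, labels = np.asarray(assignments), np.asarray(labels)
        return sum(np.bincount(labels[assignments == r]).max()
                   for r in np.unique(assignments)) / len(labels)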
Table 2: Computational Time

            MNIST                                        20NEWS
            NMFR         PCut (.9, ν1)  PCut (.9, ν2)    NMFR         PCut (.9, ν1)  PCut (.9, ν2)
            4.6mn (92%)  11s (92%)      10s (91%)        3.7mn (58%)  1.3mn (58%)    16s (57%)
In addition to the accuracy comparisons, table 2 records the time required for PCut and NMFR to
reach 95% of their limiting purity value on the two largest data sets, 20NEWS and MNIST. Each
algorithm is implemented in a fair and consistent way, and the experiments were all performed
on the same architecture. Timing results on the smaller data sets from table 1 are consistent with
those obtained for 20NEWS and MNIST. In general we observe that PCut runs significantly faster.
Additionally, as we expect for PCut, the slower rate ?1 generally leads to more accurate results while
the larger rate ?2 typically converges more quickly.
When taken together, our theoretical and experimental results clearly reveal that the model provides
a promising method for graph partitioning. The algorithm consistently achieves state-of-the-art
results, and typically runs significantly faster than other algorithms that achieve a comparable level
of accuracy. Additionally, both the model and algorithmic approach rely upon solid mathematical
foundations that are frequently missing in the multi-way clustering literature.
Acknowledgements: TL was supported by NSF DMS-1414396.
References
[1] Reid Andersen, Fan Chung, and Kevin Lang. Local graph partitioning using pagerank vectors. In Proceedings of the 47th Annual Symposium on Foundations of Computer Science (FOCS '06), pages 475–486, 2006.
[2] Raman Arora, M Gupta, Amol Kapila, and Maryam Fazel. Clustering by left-stochastic matrix factorization. In International Conference on Machine Learning (ICML), pages 761–768, 2011.
[3] Xavier Bresson, Thomas Laurent, David Uminsky, and James von Brecht. Multiclass total variation clustering. In Advances in Neural Information Processing Systems (NIPS), 2013.
[4] Inderjit S. Dhillon, Yuqiang Guan, and Brian Kulis. Weighted graph cuts without eigenvectors: A multilevel approach. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(11):1944–1957, 2007.
[5] George Karypis and Vipin Kumar. A fast and high quality multilevel scheme for partitioning irregular graphs. SIAM J. Sci. Comput., 20(1):359–392, 1998.
[6] Dilip Krishnan, Raanan Fattal, and Richard Szeliski. Efficient preconditioning of laplacian matrices for computer graphics. ACM Transactions on Graphics (TOG), 32(4):142, 2013.
[7] Oren E Livne and Achi Brandt. Lean algebraic multigrid (LAMG): Fast graph laplacian linear solver. SIAM Journal on Scientific Computing, 34(4):B499–B522, 2012.
[8] László Lovász and Miklós Simonovits. Random walks in a convex body and an improved volume algorithm. Random Structures & Algorithms, 4(4):359–412, 1993.
[9] Syama Sundar Rangapuram, Pramod Kaushik Mudrakarta, and Matthias Hein. Tight continuous relaxation of the balanced k-cut problem. In Advances in Neural Information Processing Systems, pages 3131–3139, 2014.
[10] J. Shi and J. Malik. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 22(8):888–905, 2000.
[11] Daniel A. Spielman and Shang-Hua Teng. Nearly-linear time algorithms for graph partitioning, graph sparsification, and solving linear systems. In Proceedings of the thirty-sixth annual ACM symposium on Theory of computing, pages 81–90, 2004.
[12] Daniel A. Spielman and Shang-Hua Teng. A local clustering algorithm for massive graphs and its application to nearly linear time graph partitioning. SIAM Journal on Computing, 42(1):1–26, 2013.
[13] U. von Luxburg. A tutorial on spectral clustering. Statistics and Computing, 17(4):395–416, 2007.
[14] Zhirong Yang, Tele Hao, Onur Dikmen, Xi Chen, and Erkki Oja. Clustering by nonnegative matrix factorization using graph random walk. In Advances in Neural Information Processing Systems (NIPS), pages 1088–1096, 2012.
[15] Stella X. Yu and Jianbo Shi. Multiclass spectral clustering. In International Conference on Computer Vision, 2003.
5,777 | 6,227 | An algorithm for ℓ1 nearest neighbor search via monotonic embedding
Xinan Wang∗
UC San Diego
[email protected]
Sanjoy Dasgupta
UC San Diego
[email protected]
Abstract
Fast algorithms for nearest neighbor (NN) search have in large part focused on ℓ2
distance. Here we develop an approach for ℓ1 distance that begins with an explicit
and exactly distance-preserving embedding of the points into ℓ2^2. We show how
this can efficiently be combined with random-projection based methods for ℓ2 NN
search, such as locality-sensitive hashing (LSH) or random projection trees. We
rigorously establish the correctness of the methodology and show by experimentation using LSH that it is competitive in practice with available alternatives.
1 Introduction
Nearest neighbor (NN) search is a basic primitive of machine learning and statistics. Its utility in
practice hinges on two critical issues: (1) picking the right distance function and (2) using algorithms
that find the nearest neighbor, or an approximation thereof, quickly.

The default distance function is very often Euclidean distance. This is a matter of convenience
and can be partially justified by theory: a classical result of Stone [1] shows that k-nearest neighbor
classification is universally consistent in Euclidean space. This means that no matter what the
distribution of data and labels might be, as the number of samples n goes to infinity, the k_n-NN
classifier converges to the Bayes-optimal decision boundary, for any sequence (k_n) with k_n → ∞
and k_n/n → 0. The downside is that the rate of convergence could be slow, leading to poor performance on finite data sets. A more careful choice of distance function can help, by better separating
the different classes. For the well-known MNIST data set of handwritten digits, for instance, the
1-NN classifier using Euclidean distance has an error rate of about 3%, whereas a more careful choice
of distance function, such as tangent distance [2] or shape context [3], brings this below 1%.
The second impediment to nearest neighbor search in practice is that a naive search through n
candidate neighbors takes O(n) time, ignoring the dependence on dimension. A wide variety of
ingenious data structures have been developed to speed this up. The most popular of these fall into
two categories: hashing-based and tree-based.
Perhaps the best-known hashing approach is locality-sensitive hashing (LSH) [4, 5, 6, 7, 8, 9, 10].
These randomized data structures find approximate nearest neighbors with high probability, where
c-approximate solutions are those that are at most c times as far as the nearest neighbor.
Whereas hashing methods create a lattice-like spatial partition, tree methods [11, 12, 13, 14] create
a hierarchical partition that can also be used to speed up nearest neighbor search. There are families
of randomized trees with strong guarantees on the tradeoff between query time and probability of
finding the exact nearest neighbor [15].

These hashing and tree methods for ℓ2 distance both use the same primitive: random projection [16].
For data in Rd , they (repeatedly) choose a random direction u from the multivariate Gaussian
?
Supported by UC San Diego Jacobs Fellowship
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
N (0, Id ) and then project points x onto this direction: x ? u ? x. Such projections have many
appealing mathematical properties that make it possible to give algorithmic guarantees, and that
also produce good performance in practice.
For distance functions other than ℓ2, there has been far less work. In this paper, we develop nearest
neighbor methods for ℓ1 distance. This is a more natural choice than ℓ2 in many situations, for
instance when the data points are probability distributions: documents are often represented as
distributions over topics, images as distributions over categories, and so on. Earlier works on ℓ1 search
are summarized below. We adopt a different approach, based on a novel embedding.
One basic fact is that ℓ1 distance is not embeddable in ℓ2 [17]. That is, given a set of points
x_1, ..., x_n ∈ R^d, it is in general not possible to find corresponding points z_1, ..., z_n ∈ R^q such that
‖x_i - x_j‖_1 = ‖z_i - z_j‖_2. This can be seen even from the four points at the vertices of a square: any
embedding of these into ℓ2 induces a multiplicative distortion of at least √2.

Interestingly, however, the square root of ℓ1 distance is embeddable in ℓ2 [18]. And the nearest
neighbor with respect to ℓ1 distance is the same as the nearest neighbor with respect to ℓ1^{1/2}. This
observation is the starting point of our approach. It suggests that we might be able to embed data
into ℓ2 and then simply apply well-established methods for ℓ2 nearest neighbor search. However,
there are numerous hurdles to overcome.

First, the embeddability of ℓ1^{1/2} into ℓ2 is an existential, not algorithmic, fact. Indeed, all that is
known for the general case is that there exists such an embedding into Hilbert space. For the special
case of data in {0, 1, ..., M}^d, earlier work has suggested a unary embedding into a Hamming
space {0, 1}^{Md} (where 0 ≤ x ≤ M gets mapped to x 1's followed by (M - x) 0's) [19], but this is
wasteful of space and is inefficient to be used by dimension reduction algorithms [16] when M is
large. Our embedding is general and is more efficient.
Now, given a finite point set x_1, ..., x_n ∈ R^d and the knowledge that an embedding exists, we
could use multidimensional scaling [20] to find such an embedding. But this takes O(n^3) time,
which is often not viable. Instead, we exhibit an explicit embedding: we give an expression for
points z_1, ..., z_n ∈ R^{O(nd)} such that ‖x_i - x_j‖_1 = ‖z_i - z_j‖_2^2.

This brings us to the second hurdle. The explicit construction avoids infinite-dimensional space but
is still much higher-dimensional than we would like. The space requirement for writing down the n
embedded points is O(n^2 d), which is prohibitive in practice. To deal with this, we recall that the two
popular schemes for ℓ2 embedding described above are both based on Gaussian random projections,
and in fact look at the data only through the lens of such projections. We show how to compute these
projections without ever constructing the O(nd)-dimensional embeddings explicitly.
Finally, even if it is possible to efficiently build a data structure on the n points, how can queries
be incorporated? It turns out that if a query point is added to the original n points, our explicit
embedding changes significantly. Nonetheless, by again exploiting properties of Gaussian random
projections, we show that it is possible to hold on to the random projections of the original n embedded points and to set the projection of the query point so that the correct joint distribution is
achieved. Moreover, this can be done very ef?ciently.
Finally, we run a variety of experiments showing the good practical performance of this approach.
Related work
The k-d tree [11] is perhaps the prototypical tree-based method for nearest neighbor search, and can
be used for 1 distance. It builds a hierarchical partition of the data using coordinate-wise splits, and
uses geometric reasoning to discard subtrees during NN search. Its query time can degenerate badly
with increasing dimension, as a result of which several variants have been developed, such as trees
in which the cells are allowed to overlap slightly [21]. Various tree-based methods have also been
developed for general metrics, such as the metric tree and cover tree [14, 12].
For k-d tree variants, theoretical guarantees are available for exact ℓ2 nearest neighbor search when
the split direction is chosen at random from a multivariate Gaussian [15]. For a data set of n points,
the tree has size O(n) and the query time is O(2d log n), where d is the intrinsic dimension of the
data. Such analysis is not available for 1 distance.
Also in wide use is locality-sensitive hashing for approximate nearest neighbor search [22]. For a
data set of n points, this scheme builds a data structure of size O(n^{1+ρ}) and finds a c-approximate
nearest neighbor in time O(n^ρ), for some ρ > 0 that depends on c, on the specific distance function,
and on the hash family. For ℓ2 distance, it is known how to achieve ρ ≈ 1/c^2 [23], although the
scheme most commonly used in practice has ρ ≈ 1/c [8]. This works by repeatedly using the
following hash function:
    h(x) = ⌊ (v · x + b) / R ⌋,

where v is chosen at random from a multivariate Gaussian, R > 0 is a constant, and b is uniformly
distributed in [0, R). A similar scheme also works for ℓ1, using Cauchy random projection: each
coordinate of v is picked independently from a standard Cauchy distribution. This achieves exponent
ρ ≈ 1/c, although one downside is the high variance of this distribution. Another LSH family
[22, 10] uses a randomly shifted grid for ℓ1 nearest neighbor search. But it is less used in practice,
due to its restrictions on data. For example, if the nearest neighbor is further away than the width of
the grid, it may never be found.
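A sketch of this hash family, with our names; passing cauchy=True swaps the Gaussian projection for the Cauchy one used for ℓ1:

    import numpy as np

    def lsh_hash(X, R=4.0, cauchy=False, rng=None):
        """h(x) = floor((v . x + b) / R), with b ~ U[0, R); one hash function per call."""
        rng = np.random.default_rng() if rng is None else rng
        v = rng.standard_cauchy(X.shape[1]) if cauchy else rng.standard_normal(X.shape[1])
        b = rng.uniform(0.0, R)
        return np.floor((X @ v + b) / R).astype(int)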
Besides LSH, random projection is the basis for some other NN search algorithms [24, 25], classification methods [26], and dimension reduction techniques [27, 28, 29].
There are several impediments to developing NN methods for ℓ1 spaces. 1) There is no Johnson–Lindenstrauss type dimension reduction technique for ℓ1 [30]. 2) The Cauchy random projection
does not preserve the ℓ1 distance as a norm, which restricts its usage for norm-based algorithms [31].
3) Useful random properties [26] cannot be formulated exactly; only approximations exist. Fortunately, all three problems are absent in ℓ2 space, which motivates developing efficient embedding algorithms from ℓ1 to ℓ2.
2 Explicit embedding
We begin with an explicit isometric embedding from ℓ1 to ℓ2^2 for 1-dimensional data. This extends
immediately to multiple dimensions because both ℓ1 and ℓ2^2 distance are coordinatewise additive.
2.1 The 1-dimensional case
First, sort the points x_1, ..., x_n ∈ R so that x_1 ≤ x_2 ≤ ··· ≤ x_n. Then, construct the embedding
φ(x_1), φ(x_2), ..., φ(x_n) ∈ R^{n-1} as follows:

    φ(x_1) = ( 0, 0, ..., 0 )^T
    φ(x_2) = ( √(x_2 - x_1), 0, ..., 0 )^T
    φ(x_3) = ( √(x_2 - x_1), √(x_3 - x_2), 0, ..., 0 )^T
      ⋮
    φ(x_n) = ( √(x_2 - x_1), √(x_3 - x_2), √(x_4 - x_3), ..., √(x_n - x_{n-1}) )^T    (1)
For any 1 ≤ i < j ≤ n, φ(x_i) and φ(x_j) agree on all coordinates except i to (j - 1). Therefore,
    ‖φ(x_i) - φ(x_j)‖_2 = ( ∑_{k=i+1}^{j} ( √(x_k - x_{k-1}) )^2 )^{1/2} = ( ∑_{k=i+1}^{j} (x_k - x_{k-1}) )^{1/2} = |x_j - x_i|^{1/2},    (2)

so the embedding preserves the ℓ1^{1/2} distance between these points. Since the construction places no
restrictions on the range of x_1, x_2, ..., x_n, it is applicable to any finite set of points.
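A sketch of the construction in Eq. (1), with a check of Eq. (2); the names are ours and rows of the returned matrix are the embedded points:

    import numpy as np

    def embed_1d(x):
        """Explicit 1-d embedding of Eq. (1): phi(x_i) in R^{n-1}, one point per row."""
        x = np.asarray(x, dtype=float)
        order = np.argsort(x)
        gaps = np.sqrt(np.diff(x[order]))        # sqrt of consecutive gaps in sorted order
        Z_sorted = np.zeros((len(x), len(x) - 1))
        for i in range(1, len(x)):
            Z_sorted[i, :i] = gaps[:i]           # i-th smallest point keeps the first i gaps
        Z = np.empty_like(Z_sorted)
        Z[order] = Z_sorted                      # undo the sort: row j embeds x[j]
        return Z

    x = np.array([3.0, -1.0, 7.0, 2.5])
    Z = embed_1d(x)
    assert np.isclose(((Z[0] - Z[2]) ** 2).sum(), abs(x[0] - x[2]))  # Eq. (2), squared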
2.2 Extension to multiple dimensions
We construct an embedding of d-dimensional points by stacking 1-dimensional embeddings.
Consider points x_1, x_2, ..., x_n ∈ R^d. Suppose we have a collection of embedding maps
φ_1, φ_2, ..., φ_d, one per dimension. Each of the embeddings is constructed from the values on a
single coordinate: if we let x_i^{(j)} denote the j-th coordinate of x_i, for 1 ≤ j ≤ d, then embedding φ_j
is based on x_1^{(j)}, x_2^{(j)}, ..., x_n^{(j)} ∈ R. The overall embedding is the concatenation

    Φ(x_i) = ( φ_1(x_i^{(1)})^T, φ_2(x_i^{(2)})^T, ..., φ_d(x_i^{(d)})^T )^T ∈ R^{d(n-1)}    (3)

where 1 ≤ i ≤ n, and ^T denotes transpose. For any 1 ≤ i < j ≤ n,

    ‖Φ(x_i) - Φ(x_j)‖_2 = ( ∑_{k=1}^{d} ‖φ_k(x_i^{(k)}) - φ_k(x_j^{(k)})‖_2^2 )^{1/2}    (4)

                        = ( ∑_{k=1}^{d} |x_i^{(k)} - x_j^{(k)}| )^{1/2} = ‖x_i - x_j‖_1^{1/2}    (5)
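Stacking the coordinatewise maps as in Eq. (3) then reuses the 1-dimensional sketch above:

    import numpy as np

    def embed(X):
        """Concatenated embedding Phi(x_i) in R^{d(n-1)}; reuses embed_1d from above."""
        return np.hstack([embed_1d(X[:, j]) for j in range(X.shape[1])])

    # For any pair i, j:  ||Phi(x_i) - Phi(x_j)||_2^2 == ||x_i - x_j||_1  (Eq. 5, squared).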
It may be of independent interest to consider the properties of this explicit embedding. We can
represent it by a matrix of n columns with one embedded point per column. The rank of this
matrix, and therefore the dimensionality of the embedded points, turns out to be O(n). But we
can show that the "effective rank" [32] of the centered matrix is just O(d log n); see Appendix B.
3 Incorporating a query
Once again, we begin with the 1-dimensional case and then extend to higher dimension.
3.1 The 1-dimensional case
For nearest neighbor search, we need a joint embedding of the data points S = {x1 , x2 , . . . , xn }
with the subsequent query point q. In fact, we need to embed S first and then incorporate q later, but
this is non-trivial since adding q changes the explicit embedding of other points.

We start with an example. Again, assume x_1 ≤ x_2 ≤ ··· ≤ x_n.
Example 1. Suppose query q has x_2 ≤ q < x_3. Adding q to the original n points changes the
embedding φ(·) ∈ R^{n-1} of Eq. 1 to ψ(·) ∈ R^n. Notice that the dimension increases by one.

    ψ(x_1) = ( 0, 0, 0, 0, ..., 0 )^T
    ψ(x_2) = ( √(x_2 - x_1), 0, 0, 0, ..., 0 )^T
    ψ(x_3) = ( √(x_2 - x_1), √(q - x_2), √(x_3 - q), 0, ..., 0 )^T
      ⋮
    ψ(x_n) = ( √(x_2 - x_1), √(q - x_2), √(x_3 - q), √(x_4 - x_3), ..., √(x_n - x_{n-1}) )^T    (6)

The query point is mapped to ψ(q) = ( √(x_2 - x_1), √(q - x_2), 0, ..., 0 )^T ∈ R^n.

From the example above, it is clear what happens when q lies between some x_i and x_{i+1}. There are
also two "corner cases" that can occur: q < x_1 and q > x_n. Fortunately, the embedding of S is
almost unchanged for the corner cases: ψ(x_i) = (φ(x_i)^T, 0)^T ∈ R^n, appending a zero at the end.
For q < x_1, the query is mapped to ψ(q) = (0, ..., 0, √(x_1 - q))^T ∈ R^n; for q ≥ x_n, the query is
mapped to ψ(q) = ( √(x_2 - x_1), √(x_3 - x_2), ..., √(x_n - x_{n-1}), √(q - x_n) )^T ∈ R^n.
3.2 Random projection for the 1-dimensional case
We would like to generate Gaussian random projections of the ℓ2 embeddings of the data points. In
this subsection, we mainly focus on the typical case when the query q lies between two data points,
and we leave the treatment of the (simpler) corner cases to Alg. 1. The notation follows section 3.1,
and we assume the x_i are arranged in increasing order for i = 1, 2, ..., n.

Setting 1. The query lies between two data points: x_ℓ ≤ q < x_{ℓ+1} for some 1 ≤ ℓ ≤ n - 1.
We will consider two methods for randomly projecting the embedding of S ∪ {q} and show that they
yield exactly the same joint distribution.
The first method applies Gaussian random projection to the embedding ψ of S ∪ {q}. Sample a
multivariate Gaussian vector v from N(0, I_n). For any x ∈ S ∪ {q}, the projection is

    p_g(x) := v^T ψ(x)    (7)

This is exactly the projection we want. However, it requires both S and q, whereas in practice, we
will initially have to project just S by itself, and we will only later be given some (arbitrary) q.
The second method starts by projecting the explicitly embedded points S. Later, it receives query
q and finds a suitable projection for it as well. So, we begin by sampling a multivariate Gaussian
vector u from N(0, I_{n-1}), and for any x ∈ S, use the projection

    p_e(x) := u^T φ(x)    (8)
where the subindex e stands for embedding. Conditioned on the value (p_e(x_{ℓ+1}) - p_e(x_ℓ)), namely
√(x_{ℓ+1} - x_ℓ) · u^{(ℓ)}, the projection of a subsequent query q is taken to be

    p_e(q) = p_e(x_ℓ) + η,    η ∼ N( σ_1^2 (p_e(x_{ℓ+1}) - p_e(x_ℓ)) / (σ_1^2 + σ_2^2),  σ_1^2 σ_2^2 / (σ_1^2 + σ_2^2) )    (9)

where σ_1^2 = q - x_ℓ and σ_2^2 = x_{ℓ+1} - q.
Theorem 1. Fix any x_1, ..., x_n, q ∈ R satisfying Setting 1. Consider the joint distribution of
[p_g(x_1), p_g(x_2), ..., p_g(x_n), p_g(q)] induced by a random choice of v (as per Eq. 7), and the joint
distribution of [p_e(x_1), p_e(x_2), ..., p_e(x_n), p_e(q)] induced by a random choice of u and η (as
per Eqs. 8 and 9). These distributions are identical.

The details are in Appendix A: briefly, we show that both joint distributions are multivariate Gaussians, and that they have the same mean and covariance.
We highlight the advantages of our method. First, projecting the data set using Eq. 8 does not require
advance knowledge of the query, which is crucial for nearest neighbor search; second, generating
the projection for the 1-dimensional query takes O(log n) time, which makes this method efficient.
We describe the 1-dimensional algorithm in Alg. 1, where we assume that a permutation that sorts
the points, denoted π, is provided, along with the location of q within this ordering, denoted ℓ. We
will resolve this later in Alg. 2.
3.3 Random projection for the higher dimensional case
We will henceforth use ERP (Euclidean random projection) to denote our overall scheme consisting
of embedding ℓ1 into ℓ2², followed by random Gaussian projection (Alg. 2). A competitor scheme,
as described earlier, applies Cauchy random projection directly in the ℓ1 space; we refer to this as
CRP. The time and space costs for ERP are shown in Table 1, if we generate k projections for n data
points and m queries in R^d. The costs scale linearly in d, since the constructions and computation
are dimension by dimension. We have a detailed analysis below.
Preprocessing: This involves sorting the points along each coordinate separately and storing the
resulting permutations π1, . . . , πd. The time and space costs are acceptable, because reading or
storing the data takes as much as O(nd).
Project data: The time taken by ERP to project the n points is comparable to that of CRP. But
ERP requires a factor O(n) more space, compared to O(kd) for CRP, because it needs to store the
projections of each of the individual coordinates of the data points.
Project query: ERP methods are efficient for query answering. The projection is calculated directly
in the original d-dimensional space. The log n overhead comes from using binary search, coordinatewise, to place the query within the ordering of the data points. Once these ranks are obtained,
they can be reused for as many projections as needed.
Algorithm 1 Random projection (1-dimensional case)

function project-data(S, π)
  input:
    - data set S = (xi : 1 ≤ i ≤ n)
    - sorted indices π = (πi : 1 ≤ i ≤ n) such that x_{π1} ≤ x_{π2} ≤ · · · ≤ x_{πn}
  output:
    - projections P = (pi : 1 ≤ i ≤ n) for S
  p_{π1} ← 0
  for i = 2, 3, . . . , n do
    ui ← N(0, 1)
    p_{πi} ← p_{π(i−1)} + ui · √(x_{πi} − x_{π(i−1)})
  end for
  return P

function project-query(q, ℓ, S, π, P)
  input:
    - query q and its rank ℓ in data set S
    - sorted indices π of S
    - projections P of S
  output:
    - projection pq for q
  case 1 ≤ ℓ ≤ n − 1:
    Δ1² ← q − x_{πℓ}
    Δ2² ← x_{π(ℓ+1)} − q
    δ ← N( Δ1² · (p_{π(ℓ+1)} − p_{πℓ}) / (Δ1² + Δ2²),  Δ1²Δ2² / (Δ1² + Δ2²) )
    pq ← p_{πℓ} + δ
  case ℓ = 0:
    r ← N(0, 1);  pq ← r · √(x_{π1} − q)
  case ℓ = n:
    r ← N(0, 1);  pq ← p_{πn} + r · √(q − x_{πn})
  return pq
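A NumPy sketch of Alg. 1 (function and variable names are ours; rng is a NumPy Generator such as np.random.default_rng(), and ell is the rank of q, i.e., the number of data points ≤ q, with ell = 0 and ell = n as the corner cases):

import numpy as np

def project_data(x, pi, rng):
    # Alg. 1, project-data: cumulative Gaussian increments scaled by sqrt-gaps.
    xs = x[pi]                                   # sorted values
    u = rng.standard_normal(len(x) - 1)
    p_sorted = np.concatenate(([0.0], np.cumsum(u * np.sqrt(np.diff(xs)))))
    p = np.empty(len(x))
    p[pi] = p_sorted                             # projection stored per original index
    return p

def project_query(q, ell, x, pi, p, rng):
    # Alg. 1, project-query, using the bridge-style interpolation of Eq. 9.
    xs, ps = x[pi], p[pi]
    if ell == 0:                                 # corner case: q below the minimum
        return rng.standard_normal() * np.sqrt(xs[0] - q)
    if ell == len(x):                            # corner case: q at or above the maximum
        return ps[-1] + rng.standard_normal() * np.sqrt(q - xs[-1])
    d1, d2 = q - xs[ell - 1], xs[ell] - q        # Delta_1^2 and Delta_2^2
    mean = d1 * (ps[ell] - ps[ell - 1]) / (d1 + d2)
    std = np.sqrt(d1 * d2 / (d1 + d2))
    return ps[ell - 1] + mean + std * rng.standard_normal()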
Table 1: Efficiency of the ERP algorithm: generate k projections for n data points and m queries in R^d.

              Preprocessing   Project data   Project query
  Time cost   O(dn log n)     O(knd)         O(md(k + log n))
  Space cost  O(dn)           O(knd)         NA

4 Experiment
In this section, we demonstrate that ERP can be directly used by existing NN search algorithms,
such as LSH, for efficient ℓ1 NN search. We choose commonly used data sets for image retrieval
and text classification. Besides our method, we also implement the metric tree (a popular tree-type
data structure) and Cauchy LSH for comparison.
Data sets. When data points represent distributions, the ℓ1 distance is natural. We use four such data
sets. 1) Corel uci [21], available at [33], contains 68,040 histograms (32-dimensional) of color images from Corel image collections; 2) Corel hist [34, 21], processed by [21], contains 19,797 histograms (64-dimensional, of which 44 dimensions are non-zero) of color images from the Corel Stock Library; 3)
Cade [35] is a collection of documents from Brazilian web pages; topics are extracted using the latent
Dirichlet allocation algorithm [36], and we use 13,016 documents with distributions over the 120 topics
(120-dimensional); 4) we download about 35,000 images from ImageNet [37] and process each of
them into a probability distribution over 1,000 classes using a trained convolutional neural network
[38]. Furthermore, we collapse each distribution into a 100-dimensional representation by summing each
10 consecutive probability masses, which reduces training and testing time.
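As a quick illustration (the snippet is ours), the 1,000-to-100 collapse is a reshape-and-sum over blocks of 10:

import numpy as np

p1000 = np.random.dirichlet(np.ones(1000))   # stand-in for a network's class distribution
p100 = p1000.reshape(100, 10).sum(axis=1)    # sum each 10 consecutive probability masses
assert np.isclose(p100.sum(), 1.0)           # still a probability distribution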
In each data set, we remove duplicates. For either parameter optimization or testing, we randomly
separate out 10% of the data as queries such that the query-to-data ratio is 1 : 9.
Performance evaluation We evaluate performance using query cost. For linear scan or metric
tree, this is the average number of points accessed when answering a query. For LSH, we also need
to add the overhead of evaluating the LSH functions.
The scheme [8, 39] of LSH is summarized as follows. Given three parameters k, L and R (k, L
are positive integers, k is even, R is a positive real), the LSH algorithm uses k-tuple hash functions
of the form g(x) = (h1(x), h2(x), . . . , hk(x)) to distribute data or queries to their bins. L is the
total number of such g-functions. The h-functions are of the form h(x) = ⌊(v · x + b)/R⌋, each
Algorithm 2 Overall algorithm for random projection, in the context of NN search

Starting information:
  - data set S = {xi : 1 ≤ i ≤ n} ⊂ R^d
Subsequent arrival:
  - query q ∈ R^d

preprocessing:
  Sort the data along each dimension:
  for j ∈ {1, . . . , d} do
    Sj = {xi^(j) : 1 ≤ i ≤ n}
    πj ← index-sort(Sj), where πj = {πji : 1 ≤ i ≤ n} satisfies x^(j)_{πj1} ≤ x^(j)_{πj2} ≤ · · · ≤ x^(j)_{πjn}
  end for
  save π = (π1, π2, . . . , πd)

project data:
  for j = 1, 2, . . . , d do
    Pj ← project-data(Sj, πj), where Pj = {pji : 1 ≤ i ≤ n}
  end for
  save P = (P1, P2, . . . , Pd)
  the projection of xi ∈ S is Σ_{j=1}^{d} pji

project query:
  for j = 1, 2, . . . , d do
    ℓj ← binary-search(q^(j), Sj, πj) satisfying x^(j)_{πjℓj} ≤ q^(j) ≤ x^(j)_{πj(ℓj+1)}
  end for
  save the ranks ℓ = (ℓ1, . . . , ℓd) for use in multiple projections
  pq ← 0
  for j = 1, 2, . . . , d do
    pq ← pq + project-query(q^(j), ℓj, Sj, πj, Pj)
  end for
  the projection for q is pq
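A compact sketch of Alg. 2, built on the 1-dimensional helpers project_data and project_query from the sketch after Alg. 1 (class and method names are ours; one instance yields one scalar projection per point, so k projections require k independent instances):

import numpy as np

class ERP:
    # Embed l1 into squared-l2 coordinatewise, then Gaussian-project (Alg. 2).
    def __init__(self, X, rng):
        self.X, self.rng = X, rng                # X: n x d data matrix
        self.pi = np.argsort(X, axis=0)          # sorted indices per coordinate
        self.P = np.stack([project_data(X[:, j], self.pi[:, j], rng)
                           for j in range(X.shape[1])], axis=1)

    def data_projections(self):
        return self.P.sum(axis=1)                # projection of each point in S

    def query_projection(self, q):
        total = 0.0
        for j in range(self.X.shape[1]):
            xs = self.X[self.pi[:, j], j]
            ell = int(np.searchsorted(xs, q[j], side='right'))  # rank via binary search
            total += project_query(q[j], ell, self.X[:, j],
                                   self.pi[:, j], self.P[:, j], self.rng)
        return total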
Table 2: Performance evaluation: Query cost = Tr + To.

                               Retrieval cost Tr    Overhead To
  Linear scan or metric tree   # accessed points    0
  CRP-LSH                      # accessed points    (k/2) · √(2L)
  ERP-LSH                      # accessed points    (k/2) · √(2L) + log n
either explicitly or implicitly associated with a random vector v and a uniformly distributed variable
b ∈ [0, R). As suggested in [39], we implement the reuse of h-functions so that only (k/2) · √(2L) of
them are actually evaluated. For ERP-LSH, there is an additional overhead of log n due to the use
of binary search. We summarize these costs in Table 2; for conciseness, we have removed the linear
dependence on d in both the retrieval cost and the overhead.
Implementations. The linear scan and the metric tree are for exact NN search. We use the code
[40] for the metric tree. For LSH, there is only public code for ℓ2 NN search. We implement the LSH
scheme, referring to the manual [39]. In particular, we implement the reuse of the h-functions, such
that the number of actually evaluated h-functions is (k/2) · √(2L), in contrast to (k · L).
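For concreteness, a hedged sketch of one such h-function (the function name is ours): Cauchy entries for v give the ℓ1-sensitive hash used by CRP, while Gaussian entries give the ℓ2-sensitive hash that ERP emulates implicitly, since after the ℓ1-to-squared-ℓ2 embedding the dot product v · x is, in effect, supplied by the projection pq of Alg. 2.

import numpy as np

def make_hash(dim, R, rng, stable='gaussian'):
    # One h-function of the form h(x) = floor((v . x + b) / R),
    # with v drawn from a p-stable distribution and b uniform in [0, R).
    v = (rng.standard_cauchy(dim) if stable == 'cauchy'
         else rng.standard_normal(dim))
    b = rng.uniform(0.0, R)
    return lambda x: int(np.floor((v @ x + b) / R))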
We choose approximation factor c = 1.5 (the results turn out to be much closer to the true NN), and
set the success rate to 0.9, which means that the algorithm should report a c-approximate NN
successfully for at least 90% of the queries. Taking the parameter suggestions of [8] into account, we
choose R for CRP-LSH from d_NN · {1, 5, 10, 50, 100} and R for ERP-LSH from d_NN^{1/2} · {1, 2, 3, 4},
where d_NN = (1/|Q|) Σ_{q∈Q} ‖q − x_NN(q)‖₁ is the average ℓ1 NN distance and
d_NN^{1/2} = (1/|Q|) Σ_{q∈Q} ‖q − x_NN(q)‖₁^{1/2} is the corresponding average for the square root of the
ℓ1 distance (the scale in which ERP operates). The term d_NN or d_NN^{1/2} normalizes
the average NN distance to 1 for LSH. Fixing R, we optimize k and L in the following ranges:
k ∈ {2, 4, . . . , 30}, L ∈ {1, 2, . . . , 40}.
Results. Both CRP-LSH and ERP-LSH achieve competitive efficiency compared with the other two methods. We list the test results in Table 3 and the parameters in Table 4 in Appendix C.
Table 3: Average query cost and average approximation rate if applicable (in parentheses).

                Corel uci (d = 32)   Corel hist (d = 44)   Cade (d = 120)    ImageNet (d = 100)
  Linear scan   61220                17809                 11715             31458
  Metric tree   2575                 718                   9184              12375
  CRP-LSH       329 ± 55 (1.07)      245 ± 43 (1.05)       292 ± 11 (1.11)   548 ± 66 (1.09)
  ERP-LSH       330 ± 18 (1.11)      250 ± 15 (1.08)       218 ± 8 (1.15)    346 ± 15 (1.13)

5 Conclusion
In this paper, we have proposed an explicit embedding from ℓ1 to ℓ2², and we have found an algorithm
to generate the random projections, reducing the time dependence on n from O(n) to O(log n). In
addition, we have observed that the effective rank of the (centered) embedding is as low as O(d ln n),
compared to its rank of O(n). Algorithms that take advantage of such a low rank remain to be explored.
Our current method takes space O(ndm) to store the parameters of the random vectors, where m is
the number of hash functions. We have implemented one empirical scheme [39] to reuse the hashing
functions; other possible schemes remain to be developed.
Acknowledgement
The authors are grateful to the National Science Foundation for support under grant IIS-1162581.
References
[1] C. J. Stone. Consistent nonparametric regression. The Annals of Statistics, 5:595–620, 1977.
[2] P. Y. Simard, Y. A. LeCun, J. S. Denker, and B. Victorri. Transformation invariance in pattern recognition: tangent distance and tangent propagation. In Neural Networks: Tricks of the Trade, volume 1524, pages 239–274. Springer-Verlag, New York, 1998.
[3] S. Belongie, J. Malik, and J. Puzicha. Shape matching and object recognition using shape contexts. IEEE Trans. Pattern Anal. Mach. Intell., 24(4):509–522, 2002.
[4] A. Broder. On the resemblance and containment of documents. In Proceedings of Compression and Complexity of Sequences, pages 21–29, 1997.
[5] P. Indyk and R. Motwani. Approximate nearest neighbors: Towards removing the curse of dimensionality. In STOC, pages 604–613, 1998.
[6] A. Broder, M. Charikar, A. Frieze, and M. Mitzenmacher. Min-wise independent permutations. Journal of Computer and System Sciences, 60:630–659, 2000.
[7] M. Charikar. Similarity estimation techniques from rounding algorithms. In STOC, pages 380–388, 2002.
[8] M. Datar, N. Immorlica, P. Indyk, and V. Mirrokni. Locality-sensitive hashing scheme based on p-stable distributions. In SoCG, pages 253–262, 2004.
[9] A. Shrivastava and P. Li. Asymmetric LSH (ALSH) for sublinear time maximum inner product search (MIPS). In NIPS, pages 2321–2329, 2014.
[10] A. Andoni and P. Indyk. Efficient algorithms for substring near neighbor problem. In SODA, pages 1203–1212, 2006.
[11] J. L. Bentley. Multidimensional binary search trees used for associative searching. Communications of the ACM, 18(9):509–517, 1975.
[12] A. Beygelzimer, S. Kakade, and J. Langford. Cover trees for nearest neighbor. In ICML, pages 97–104, 2006.
[13] S. M. Omohundro. Bumptrees for efficient function, constraint, and classification learning. In NIPS, volume 40, pages 175–179, 1991.
[14] J. K. Uhlmann. Satisfying general proximity/similarity queries with metric trees. Information Processing Letters, 40:175–179, 1991.
[15] S. Dasgupta and K. Sinha. Randomized partition trees for nearest neighbor search. Algorithmica, 72:237–263, 2015.
[16] W. Johnson and J. Lindenstrauss. Extensions of Lipschitz maps into a Hilbert space. Contemporary Mathematics, 26:189–206, 1984.
[17] J. H. Wells and L. R. Williams. Embeddings and Extensions in Analysis, volume 84. Springer-Verlag, New York, 1975.
[18] A. Vedaldi and A. Zisserman. Efficient additive kernels via explicit feature maps. IEEE Trans. Pattern Anal. Mach. Intell., 34:480–492, 2012.
[19] N. Linial, E. London, and Y. Rabinovich. The geometry of graphs and some of its algorithmic applications. In FOCS, pages 577–591, 1994.
[20] I. Borg and P. Groenen. Modern Multidimensional Scaling: Theory and Applications. Springer-Verlag, Berlin, 1997.
[21] T. Liu, A. Moore, A. Gray, and K. Yang. An investigation of practical approximate nearest neighbor algorithms. In NIPS, pages 825–832, 2004.
[22] A. Andoni and P. Indyk. Near-optimal hashing algorithms for approximate nearest neighbor in high dimensions. Communications of the ACM, 51:117–122, 2008.
[23] A. Andoni and P. Indyk. Near-optimal hashing algorithms for approximate nearest neighbor in high dimensions. In FOCS, pages 459–468, 2006.
[24] J. Kleinberg. Two algorithms for nearest-neighbor search in high dimensions. In STOC, pages 599–608, 1997.
[25] N. Ailon and B. Chazelle. The fast Johnson–Lindenstrauss transform and approximate nearest neighbors. SIAM Journal on Computing, 39:302–322, 2009.
[26] P. Li, G. Samorodnitsky, and J. Hopcroft. Sign Cauchy projections and chi-square kernel. In NIPS, pages 2571–2579, 2013.
[27] S. Dasgupta and A. Gupta. An elementary proof of a theorem of Johnson and Lindenstrauss. Random Structures & Algorithms, 22:60–65, 2003.
[28] D. Achlioptas. Database-friendly random projections. In Proceedings of the Symposium on Principles of Database Systems, pages 274–281, 2001.
[29] R. I. Arriaga and S. Vempala. An algorithmic theory of learning: Robust concepts and random projection. In FOCS, pages 616–623, 1999.
[30] M. Charikar and A. Sahai. Dimension reduction in the L1 norm. In FOCS, pages 551–560, 2002.
[31] P. Indyk. Stable distributions, pseudorandom generators, embeddings, and data stream computation. Journal of the ACM, 53(3):307–323, 2006.
[32] M. Rudelson and R. Vershynin. Sampling from large matrices: An approach through geometric functional analysis. Journal of the ACM (JACM), 54(4):21, 2007.
[33] https://archive.ics.uci.edu/ml/datasets/Corel+Image+Features.
[34] A. Gionis, P. Indyk, and R. Motwani. Similarity search in high dimensions via hashing. In VLDB, volume 99, pages 518–529, 1999.
[35] A. Cardoso-Cachopo. Improving Methods for Single-label Text Categorization. PhD thesis, Instituto Superior Técnico, Universidade Técnica de Lisboa, 2007. Data available at http://ana.cachopo.org/datasets-for-single-label-text-categorization.
[36] http://www.cs.columbia.edu/~blei/topicmodeling_software.html.
[37] J. Deng, W. Dong, R. Socher, L. J. Li, K. Li, and F. Li. ImageNet: A large-scale hierarchical image database. In CVPR, pages 248–255. IEEE, 2009.
[38] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.
[39] A. Andoni and P. Indyk. E²LSH 0.1 user manual. Technical report, 2005.
[40] J. K. Uhlmann. Implementing metric trees to satisfy general proximity/similarity queries. Manuscript, 1991.
Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings
Tolga Bolukbasi^1, Kai-Wei Chang^2, James Zou^2, Venkatesh Saligrama^{1,2}, Adam Kalai^2
^1 Boston University, 8 Saint Mary's Street, Boston, MA
^2 Microsoft Research New England, 1 Memorial Drive, Cambridge, MA
tolgab@bu.edu, kw@kwchang.net, jamesyzou@gmail.com, srv@bu.edu, adam.kalai@microsoft.com
Abstract
The blind application of machine learning runs the risk of amplifying biases present
in data. Such a danger is facing us with word embedding, a popular framework to
represent text data as vectors which has been used in many machine learning and
natural language processing tasks. We show that even word embeddings trained on
Google News articles exhibit female/male gender stereotypes to a disturbing extent.
This raises concerns because their widespread use, as we describe, often tends to
amplify these biases. Geometrically, gender bias is first shown to be captured by
a direction in the word embedding. Second, gender neutral words are shown to
be linearly separable from gender definition words in the word embedding. Using
these properties, we provide a methodology for modifying an embedding to remove
gender stereotypes, such as the association between the words receptionist and
female, while maintaining desired associations such as between the words queen
and female. Using crowd-worker evaluation as well as standard benchmarks, we
empirically demonstrate that our algorithms significantly reduce gender bias in
embeddings while preserving the its useful properties such as the ability to cluster
related concepts and to solve analogy tasks. The resulting embeddings can be used
in applications without amplifying gender bias.
1 Introduction
Research on word embeddings has drawn significant interest in machine learning and natural language
processing. There have been hundreds of papers written about word embeddings and their applications,
from Web search [22] to parsing Curriculum Vitae [12]. However, none of these papers have
recognized how blatantly sexist the embeddings are and hence risk introducing biases of various
types into real-world systems.
A word embedding, trained on word co-occurrence in text corpora, represents each word (or common
phrase) w as a d-dimensional word vector ~w ∈ R^d. It serves as a dictionary of sorts for computer
programs that would like to use word meaning. First, words with similar semantic meanings tend to
have vectors that are close together. Second, the vector differences between words in embeddings
have been shown to represent relationships between words [27, 21]. For example, given an analogy
puzzle, "man is to king as woman is to x" (denoted as man:king :: woman:x), simple arithmetic of
the embedding vectors finds that x = queen is the best answer because ~man − ~woman ≈ ~king − ~queen.
Similarly, x=Japan is returned for Paris:France :: Tokyo:x. It is surprising that a simple vector
arithmetic can simultaneously capture a variety of relationships. It has also excited practitioners
because such a tool could be useful across applications involving natural language. Indeed, they
are being studied and used in a variety of downstream applications (e.g., document ranking [22],
sentiment analysis [14], and question retrieval [17]).
However, the embeddings also pinpoint sexism implicit in text. For instance, it is also the case that:
~man − ~woman ≈ ~computer programmer − ~homemaker.
In other words, the same system that solved the above reasonable analogies will offensively answer
"man is to computer programmer as woman is to x" with x = homemaker. Similarly, it outputs that a
Figure 1: Left: The most extreme occupations as projected onto the she-he gender direction in
w2vNEWS. Occupations such as businesswoman, where gender is suggested by the orthography,
were excluded.

  Extreme she: 1. homemaker, 2. nurse, 3. receptionist, 4. librarian, 5. socialite,
  6. hairdresser, 7. nanny, 8. bookkeeper, 9. stylist, 10. housekeeper.
  Extreme he: 1. maestro, 2. skipper, 3. protege, 4. philosopher, 5. captain,
  6. architect, 7. financier, 8. warrior, 9. broadcaster, 10. magician.

Right: Automatically generated analogies for the pair she-he using the procedure described in the
text; each automatically generated analogy was evaluated by 10 crowd-workers as to whether or not
it reflects gender stereotype.

  Gender-stereotype she-he analogies: sewing-carpentry, registered nurse-physician,
  housewife-shopkeeper, nurse-surgeon, interior designer-architect, softball-baseball,
  blond-burly, feminism-conservatism, cosmetics-pharmaceuticals, giggle-chuckle,
  vocalist-guitarist, petite-lanky, sassy-snappy, diva-superstar, charming-affable,
  volleyball-football, cupcakes-pizzas, lovely-brilliant.
  Gender-appropriate she-he analogies: queen-king, waitress-waiter, sister-brother,
  mother-father, ovarian cancer-prostate cancer, convent-monastery.
father is to a doctor as a mother is to a nurse. The primary embedding studied in this paper is the
popular publicly available word2vec [19, 20] 300-dimensional embedding trained on a corpus of
Google News texts consisting of 3 million English words, which we refer to here as w2vNEWS.
One might have hoped that the Google News embedding would exhibit little gender bias because
many of its authors are professional journalists. We also analyze other publicly available embeddings
trained via other algorithms and find similar biases (Appendix B).
In this paper, we quantitatively demonstrate that word-embeddings contain biases in their geometry
that reflect gender stereotypes present in broader society.1 Due to their wide-spread usage as basic
features, word embeddings not only reflect such stereotypes but can also amplify them. This poses a
significant risk and challenge for machine learning and its applications. The analogies generated from
these embeddings spell out the bias implicit in the data on which they were trained. Hence, word
embeddings may serve as a means to extract implicit gender associations from a large text corpus
similar to how Implicit Association Tests [11] detect automatic gender associations possessed by
people, which often do not align with self reports.
To quantify bias, we will compare a word vector to the vectors of a pair of gender-specific words. For
instance, the fact that ~nurse is close to ~woman is not in itself necessarily biased (it is also somewhat
close to ~man; all are humans), but the fact that these distances are unequal suggests bias. To make
this rigorous, consider the distinction between gender-specific words that are associated with a gender
by definition, and the remaining gender-neutral words. Standard examples of gender-specific words
include brother, sister, businessman and businesswoman. We will use the gender specific words to
learn a gender subspace in the embedding, and our debiasing algorithm removes the bias only from
the gender neutral words while respecting the definitions of these gender specific words.
We propose approaches to reduce gender biases in the word embedding while preserving the useful
properties of the embedding. Surprisingly, not only does the embedding capture bias, but it also
contains sufficient information to reduce this bias. We will leverage the fact that there exists a
low-dimensional subspace in the embedding that empirically captures much of the gender bias.
2 Related work and preliminaries
Gender bias and stereotype in English. It is important to quantify and understand bias in languages
as such biases can reinforce the psychological status of different groups [28]. Gender bias in language
has been studied over a number of decades in a variety of contexts (see, e.g., [13]) and we only
highlight some of the findings here. Biases differ across people though commonalities can be detected.
Implicit Association Tests [11] have uncovered gender-word biases that people do not self-report and
may not even be aware of. Common biases link female terms with liberal arts and family and male
terms with science and careers [23]. Bias is seen in word morphology, i.e., the fact that words such as
1. Stereotypes are biases that are widely held among a group of people. We show that the biases in the word
embedding are in fact closely aligned with social conceptions of gender stereotype, as evaluated by U.S.-based
crowd workers on Amazon's Mechanical Turk. The crowd agreed that the biases reflected both in the locations
of vectors (e.g., ~doctor closer to ~man than to ~woman) and in analogies (e.g., he:coward :: she:whore) exhibit
common gender stereotypes.
actor are, by default, associated with the dominant class [15], and female versions of these words,
e.g., actress, are marked. There is also an imbalance in the number of words with female versus male
associations. For instance, while there are more words referring to males, there are many more words
that sexualize females than males [30]. Consistent biases have been studied within online contexts
and specifically related to the contexts we study such as online news (e.g., [26]), Web search (e.g.,
[16]), and Wikipedia (e.g., [34]).
Bias within algorithms. A number of online systems have been shown to exhibit various biases,
such as racial discrimination and gender bias in the ads presented to users [31, 4]. A recent study
found that algorithms used to predict repeat offenders exhibit indirect racial biases [1]. Different
demographic and geographic groups also use different dialects and word-choices in social media
[6]. An implication of this effect is that language used by minority groups might not be well
processed by natural language tools that are trained on "standard" data-sets. Biases in the curation of
machine learning data-sets have been explored in [32, 3].
Independent from our work, Schmidt [29] identified the bias present in word embeddings and
proposed debiasing by entirely removing multiple gender dimensions, one for each gender pair. His
goal and approach, similar but simpler than ours, was to entirely remove gender from the embedding.
There is also an intense research agenda focused on improving the quality of word embeddings from
different angles (e.g., [18, 25, 35, 7]), and the difficulty of evaluating embedding quality (as compared
to supervised learning) parallels the difficulty of defining bias in an embedding.
Within machine learning, a body of notable work has focused on ?fair? binary classification in
particular. A definition of fairness based on legal traditions is presented by Barocas and Selbst [2].
Approaches to modify classification algorithms to define and achieve various notions of fairness
have been described in a number of works, see, e.g., [2, 5, 8] and a recent survey [36]. The prior
work on algorithmic fairness is largely for supervised learning. Fair classification is defined for
settings in which algorithms classify a set of individuals using a set of features, with a
distinguished sensitive feature. In word embeddings, there are no clear individuals and no a priori
defined classification problem. However, similar issues arise, such as direct and indirect bias [24].
Word embedding. An embedding consists of a unit vector ~w ∈ R^d, with ‖~w‖ = 1, for each word
(or term) w ∈ W. We assume there is a set of gender-neutral words N ⊂ W, such as flight attendant
or shoes, which, by definition, are not specific to any gender. We denote the size of a set S by |S|. We
also assume we are given a set of F-M gender pairs P ⊂ W × W, such as she-he or mother-father,
whose definitions differ mainly in gender. Section 5 discusses how N and P can be found within
the embedding itself, but until then we take them as given. As is common, similarity between two
vectors u and v can be measured by their cosine similarity: cos(u, v) = u · v / (‖u‖ ‖v‖). This normalized
similarity is the cosine of the angle between the two vectors. Since word vectors
are normalized, cos(~w1, ~w2) = ~w1 · ~w2.²
Unless otherwise stated, the embedding we refer to is the aforementioned w2vNEWS embedding, a
d = 300-dimensional word2vec [19, 20] embedding, which has proven to be immensely useful since
it is high quality, publicly available, and easy to incorporate into any application. In particular, we
downloaded the pre-trained embedding on the Google News corpus,3 and normalized each word to
unit length as is common. Starting with the 50,000 most frequent words, we selected only lower-case
words and phrases consisting of fewer than 20 lower-case characters (words with upper-case letters,
digits, or punctuation were discarded). After this filtering, 26,377 words remained. While we focus
on w2vNEWS, we show later that gender stereotypes are also present in other embedding data-sets.
Crowd experiments.4 Two types of experiments were performed: ones where we solicited words
from the crowd (to see if the embedding biases contain those of the crowd) and ones where we
solicited ratings on words or analogies generated from our embedding (to see if the crowd?s biases
contain those from the embedding). These two types of experiments are analogous to experiments
performed in rating results in information retrieval to evaluate precision and recall. When we speak
of the majority of 10 crowd judgments, we mean those annotations made by 5 or more independent
workers. The Appendix contains the questionnaires that were given to the crowd-workers.
2. We will abuse terminology and refer to the embedding of a word and the word interchangeably. For example,
the statement cat is more similar to dog than to cow means ~cat · ~dog > ~cat · ~cow.
3. https://code.google.com/archive/p/word2vec/
4. All human experiments were performed on the Amazon Mechanical Turk platform. We selected for
U.S.-based workers to maintain homogeneity and reproducibility to the extent possible with crowdsourcing.
3 Geometry of Gender and Bias in Word Embeddings
Our first task is to understand the biases present in the word-embedding (i.e. which words are closer
to she than to he, etc.) and the extent to which these geometric biases agree with human notion of
gender stereotypes. We use two simple methods to approach this problem: 1) evaluate whether the
embedding has stereotypes on occupation words and 2) evaluate whether the embedding produces
analogies that are judged to reflect stereotypes by humans. The exploratory analysis of this section
will motivate the more rigorous metrics used in the next two sections.
Occupational stereotypes. Figure 1 lists the occupations that are closest to she and to he in the
w2vNEWS embeddings. We asked the crowdworkers to evaluate whether an occupation is considered
female-stereotypic, male-stereotypic, or neutral. The projection of the occupation words onto the she-he axis is strongly correlated with the stereotypicality estimates of these words (Spearman ρ = 0.51),
suggesting that the geometric biases of embedding vectors are aligned with crowd judgment. We
projected each of the occupations onto the she-he direction in the w2vNEWS embedding as well as a
different embedding generated by the GloVe algorithm on a web-crawl corpus [25]. The results are
highly consistent (Appendix Figure 6), suggesting that gender stereotypes is prevalent across different
embeddings and is not an artifact of the particular training corpus or methodology of word2vec.
Analogies exhibiting stereotypes. Analogies are a useful way to both evaluate the quality of a word
embedding and also its stereotypes. We first briefly describe how the embedding generate analogies
and then discuss how we use analogies to quantify gender stereotype in the embedding. A more
detailed discussion of our algorithm and prior analogy solvers is given in Appendix C.
In the standard analogy tasks, we are given three words, for example he, she, king, and look for the
4th word to solve he to king is as she to x. Here we modify the analogy task so that given two words,
e.g. he, she, we want to generate a pair of words, x and y, such that he to x as she to y is a good
analogy. This modification allows us to systematically generate pairs of words that the embedding
believes are analogous to he, she (or any other pair of seed words). The input into our analogy generator
is a seed pair of words (a, b) determining a seed direction ~a − ~b corresponding to the normalized
difference between the two seed words. In the task below, we use (a, b) = (she, he). We then score
all pairs of words x, y by the following metric:

  S_(a,b)(x, y) = cos(~a − ~b, ~x − ~y) if ‖~x − ~y‖ ≤ δ, and 0 otherwise        (1)
where δ is a threshold for similarity. The intuition of the scoring metric is that we want a good
analogy pair to be close to parallel to the seed direction while the two words are not too far apart, in
order to be semantically coherent. The parameter δ sets the threshold for semantic similarity. In all
the experiments, we take δ = 1, as we find that this choice often works well in practice. Since all
embeddings are normalized, this threshold corresponds to an angle of at most π/3, indicating that the two
forming the analogy are significantly closer together than two random embedding vectors. Given the
embedding and seed words, we output the top analogous pairs with the largest positive S(a,b) scores.
To reduce redundancy, we do not output multiple analogies sharing the same word x.
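A naive O(|W|²) sketch of this scoring (the function name is ours; E maps words to unit-norm vectors, and redundant pairs sharing the same x would be filtered downstream):

import numpy as np

def analogy_scores(E, words, a, b, delta=1.0):
    # S_(a,b)(x, y) of Eq. 1 for all candidate pairs.
    seed = E[a] - E[b]
    seed = seed / np.linalg.norm(seed)
    scores = {}
    for x in words:
        for y in words:
            if x == y:
                continue
            d = E[x] - E[y]
            n = np.linalg.norm(d)
            if 0 < n <= delta:                        # semantic-coherence threshold
                scores[(x, y)] = float(seed @ (d / n))  # cosine with seed direction
    return scores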
We employed U.S. based crowd-workers to evaluate the analogies output by the aforementioned
algorithm. For each analogy, we asked the workers two yes/no questions: (a) whether the pairing
makes sense as an analogy, and (b) whether it reflects a gender stereotype. Overall, 72 out of 150
analogies were rated as gender-appropriate by five or more out of 10 crowd-workers, and 29 analogies
were rated as exhibiting gender stereotype by five or more crowd-workers (Figure 4). Examples of
analogies generated from w2vNEWS are shown at Figure 1. The full list are in Appendix J.
Identifying the gender subspace. Next, we study the bias present in the embedding geometrically,
identifying the gender direction and quantifying the bias independent of the extent to which it is
aligned with the crowd bias. Language use is "messy" and therefore individual word pairs do not
always behave as expected. For instance, the word man has several different usages: it may be used
as an exclamation as in oh man!, to refer to people of either gender, or as a verb, e.g., man the
station. To more robustly estimate bias, we shall aggregate across multiple paired comparisons. By
combining several directions, such as ~she − ~he and ~woman − ~man, we identify a gender direction
g ∈ R^d that largely captures gender in the embedding. This direction helps us to quantify direct and
indirect biases in words and associations.
In English as in many languages, there are numerous gender pair terms, and for each we can
consider the difference between their embeddings. Before looking at the data, one might imagine
Figure 2: Left: Ten word pairs used to define gender, along with agreement with sets of definitional
(def.) and stereotypical (stereo.) words solicited from the crowd. The accuracy is shown for the
corresponding gender classifier based on which word is closer to a target word, e.g., the she-he
classifier predicts a word is female if it is closer to she than to he.

  Pair               def.   stereo.
  she-he             92%    89%
  her-his            84%    87%
  woman-man          90%    83%
  Mary-John          75%    87%
  herself-himself    93%    89%
  daughter-son       93%    91%
  mother-father      91%    85%
  gal-guy            85%    85%
  girl-boy           90%    86%
  female-male        84%    75%

Middle: The bar plot (omitted here) shows the percentage of variance explained in the PCA of the
10 pairs of gender words. The top component explains significantly more variance than any other;
the corresponding percentages for random words show a more gradual decay (figure created by
averaging over 1,000 draws of ten random unit vectors in 300 dimensions). Right: The table shows
performance of the original w2vNEWS embedding ("before") and the debiased w2vNEWS on
standard evaluation metrics measuring coherence and analogy-solving abilities: RG [27], WS [10],
MSR-analogy [21]. Higher is better.

                  RG     WS     analogy
  Before          62.3   54.5   57.0
  Hard-debiased   62.4   54.1   57.0
  Soft-debiased   62.4   54.2   56.8

The results show that the performance does not degrade after debiasing. Note that we use a subset
of the vocabulary in the experiments; therefore, the performances are lower than previously published
results. See Appendix for full results.
!
that they all had roughly the same vector differences, as in the following caricature: grandmother =
! !
!
!
!
!
!
! grandmother
! = g However, gender
wise + gal, grandfather = wise + guy,
grandfather = gal guy
pair differences are not parallel in practice, for multiple reasons. First, there are different biases
associated with with different gender pairs. Second is polysemy, as mentioned, which in this case
occurs due to the other use of grandfather as in to grandfather a regulation. Finally, randomness in
the word counts in any finite sample will also lead to differences. Figure 2 illustrates ten possible
10
gender pairs, (xi , yi ) i=1 .
To identify the gender subspace, we took the ten gender pair difference vectors and computed their
principal components (PCs). As Figure 2 shows, there is a single direction that explains the majority
of variance in these vectors. The first eigenvalue is significantly larger than the rest. Note that,
from the randomness in a finite sample of ten noisy vectors, one expects a decrease in eigenvalues.
However, as also illustrated in Figure 2, the decrease one observes due to random sampling is much more
gradual and uniform. Therefore we hypothesize that the top PC, denoted by the unit vector g, captures
the gender subspace. In general, the gender subspace could be higher dimensional and all of our
analysis and algorithms (described below) work with general subspaces.
Direct bias. To measure direct bias, we first identify words that should be gender-neutral for the
application in question. How to generate this set of gender-neutral words is described in Section 5.
Given the gender-neutral words, denoted by N, and the gender direction learned from above, g, we
define the direct gender bias of an embedding to be

  DirectBias_c = (1/|N|) Σ_{w∈N} |cos(~w, g)|^c,

where c is a parameter that determines how strict we want to be in measuring bias. If c is 0, then
|cos(~w, g)|^c = 0 only if ~w has no overlap with g, and otherwise it is 1. Such a strict measurement of
bias might be desirable in settings such as the college admissions example from the Introduction,
where it would be unacceptable for the embedding to introduce a slight preference for one candidate
over another by gender. A more gradual bias measure is obtained by setting c = 1. The presentation we
have chosen favors simplicity; it would be natural to extend our definitions to weight words by
frequency. For example, in w2vNEWS, if we take N to be the set of 327 occupations, then
DirectBias_1 = 0.08, which confirms that many occupation words have a substantial component
along the gender direction.
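This statistic is a one-liner given unit-norm vectors (a sketch with our own names):

import numpy as np

def direct_bias(E, neutral_words, g, c=1.0):
    # DirectBias_c: mean of |cos(w, g)|^c over gender-neutral words; with
    # unit-norm word vectors, cos(w, g) reduces to the dot product w . g.
    g = g / np.linalg.norm(g)
    return float(np.mean([abs(E[w] @ g) ** c for w in neutral_words]))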
4 Debiasing algorithms
The debiasing algorithms are defined in terms of sets of words rather than just pairs, for generality, so
that we can consider other biases such as racial or religious biases. We also assume that we have a set
of words to neutralize, which can come from a list or from the embedding as described in Section 5.
(In many cases it may be easier to list the gender specific words not to neutralize as this set can be
much smaller.)
Figure 3: Selected words projected along two axes: x is a projection onto the difference between
the embeddings of the words he and she, and y is a direction learned in the embedding that captures
gender neutrality, with gender neutral words above the line and gender specific words below the line.
Our hard debiasing algorithm removes the gender pair associations for gender neutral words. In this
figure, the words above the horizontal line would all be collapsed to the vertical line.
The first step, called Identify gender subspace, is to identify a direction (or, more generally, a
subspace) of the embedding that captures the bias. For the second step, we define two options:
Neutralize and Equalize or Soften. Neutralize ensures that gender neutral words are zero in the
gender subspace. Equalize perfectly equalizes sets of words outside the subspace and thereby
enforces the property that any neutral word is equidistant to all words in each equality set. For
instance, if {grandmother, grandfather} and {guy, gal} were two equality sets, then after equalization
babysit would be equidistant to grandmother and grandfather and also equidistant to gal and guy,
but presumably closer to the grandparents and further from the gal and guy. This is suitable for
applications where one does not want any such pair to display any bias with respect to neutral words.
The disadvantage of Equalize is that it removes certain distinctions that are valuable in certain
applications. For instance, one may wish a language model to assign a higher probability to the phrase
to grandfather a regulation than to grandmother a regulation, since grandfather has a meaning that
grandmother does not; equalizing the two removes this distinction. The Soften algorithm reduces
the differences between these sets while maintaining as much similarity to the original embedding as
possible, with a parameter that controls this trade-off.
To define the algorithms, it will be convenient to introduce some further notation. A subspace B is
defined by k orthogonal unit vectors B = {b1, . . . , bk} ⊂ R^d. In the case k = 1, the subspace is
simply a direction. We denote the projection of a vector v onto B by v_B = Σ_{j=1}^{k} (v · bj) bj. This
also means that v − v_B is the projection onto the orthogonal subspace.
Step 1: Identify gender subspace. Inputs: word sets W, defining sets D1, D2, . . . , Dn ⊂ W,
as well as the embedding {~w ∈ R^d}_{w∈W} and an integer parameter k ≥ 1. Let
μi := Σ_{w∈Di} ~w / |Di| be the means of the defining sets. Let the bias subspace B be the first k
rows of SVD(C), where

  C := Σ_{i=1}^{n} Σ_{w∈Di} (~w − μi)^⊤ (~w − μi) / |Di|.
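Equivalently, B consists of the top principal components of the per-set-centered vectors; a minimal sketch (function name ours; E maps words to vectors):

import numpy as np

def gender_subspace(E, defining_sets, k=1):
    # Step 1: center each defining set D_i at its mean, stack the centered
    # vectors (scaled by 1/sqrt(|D_i|) to match the 1/|D_i| weighting in C),
    # and take the top-k right singular vectors as the bias subspace B.
    rows = []
    for D in defining_sets:
        mu = np.mean([E[w] for w in D], axis=0)
        rows += [(E[w] - mu) / np.sqrt(len(D)) for w in D]
    _, _, Vt = np.linalg.svd(np.stack(rows), full_matrices=False)
    return Vt[:k]                                # k x d orthonormal basis of B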
Step 2a: Hard de-biasing (neutralize and equalize). Additional inputs: words to neutralize
N ⊆ W, and a family of equality sets E = {E1, E2, . . . , Em}, where each Ei ⊆ W. For each word
w ∈ N, let ~w be re-embedded to

  ~w := (~w − ~w_B) / ‖~w − ~w_B‖.

For each set E ∈ E, let μ := Σ_{w∈E} ~w / |E| and ν := μ − μ_B. For each w ∈ E,

  ~w := ν + √(1 − ‖ν‖²) · (~w_B − μ_B) / ‖~w_B − μ_B‖.

Finally, output the subspace B and the new embedding {~w ∈ R^d}_{w∈W}.
Equalize equates each set of words outside of B to their simple average ν and then adjusts vectors
so that they are unit length. It is perhaps easiest to understand by thinking separately of the two
components ~w_B and ~w_⊥B = ~w − ~w_B. The latter components ~w_⊥B are all simply equated to their
average. Within B, they are centered (moved to mean 0) and then scaled so that each ~w is unit length.
To motivate why we center, beyond the fact that it is common in machine learning, consider the bias
direction being the gender direction (k = 1) and a gender pair such as E = {male, female}. As discussed, it
Figure 4: Number of stereotypical (Left) and appropriate (Right) analogies generated by word
embeddings before and after debiasing.
so happens that both words are positive (female) in the gender direction, though female has a greater
projection. One can only speculate as to why this is the case, e.g., perhaps the frequency of text
such as male nurse or male escort or she was assaulted by the male. However, because female has a
greater gender component, after centering the two will be symmetrically balanced across the origin.
If instead, we simply scaled each vector's component in the bias direction without centering, male
and female would have exactly the same embedding and we would lose analogies such as father:male
:: mother:female. We note that Neutralizing and Equalizing completely remove pair bias.
Observation 1. After Steps 1 and 2a, for any gender-neutral word w, any equality set E, and any two
words e1, e2 ∈ E, we have ~w · ~e1 = ~w · ~e2 and ‖~w − ~e1‖ = ‖~w − ~e2‖. Furthermore, if
E = {{x, y} | (x, y) ∈ P} is the family of pairs defining PairBias, then PairBias = 0.
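A NumPy sketch of Step 2a (names ours; E is a dict from words to unit vectors, mutated in place; B is a k × d orthonormal basis; each equality-set word is assumed to have a nonzero centered component inside B):

import numpy as np

def drop(v, B):
    # Remove the component of v inside the subspace spanned by the rows of B.
    return v - B.T @ (B @ v)

def hard_debias(E, B, neutral_words, equality_sets):
    # Neutralize: gender-neutral words become orthogonal to B, renormalized.
    for w in neutral_words:
        v = drop(E[w], B)
        E[w] = v / np.linalg.norm(v)
    # Equalize: center each set outside B, then place members symmetrically
    # inside B so that every re-embedded word has unit length.
    for Eset in equality_sets:
        mu = np.mean([E[w] for w in Eset], axis=0)
        nu = drop(mu, B)
        scale = np.sqrt(1.0 - np.linalg.norm(nu) ** 2)
        for w in Eset:
            vB = B.T @ (B @ (E[w] - mu))         # w_B - mu_B
            E[w] = nu + scale * vB / np.linalg.norm(vB)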
Step 2b: Soft bias correction. Overloading the notation, we let W ∈ R^{d×|vocab|} denote the matrix
of all embedding vectors and N denote the matrix of the embedding vectors corresponding to gender-neutral
words. W and N are learned from some corpus and are inputs to the algorithm. The
desired debiasing transformation T ∈ R^{d×d} is a linear transformation that seeks to preserve pairwise
inner products between all the word vectors while minimizing the projection of the gender-neutral
words onto the gender subspace. This can be formalized as

  min_T ‖(TW)^⊤(TW) − W^⊤W‖_F² + λ ‖(TN)^⊤(TB)‖_F²,

where B is the gender subspace learned in Step 1 and λ is a tuning parameter
that balances the objective of preserving the original embedding inner products with the goal of
reducing gender bias. For large λ, T would remove the projection onto B from all the vectors in N,
which corresponds exactly to Step 2a. In the experiments, we use λ = 0.2. The optimization problem
is a semi-definite program and can be solved efficiently. The output embedding is normalized to have
unit length, Ŵ = {T~w/‖T~w‖₂ : w ∈ W}.
5 Determining gender neutral words
For practical purposes, since there are many fewer gender-specific words, it is more efficient to
enumerate the set of gender-specific words S and take the gender-neutral words to be the complement,
N = W \ S. Using dictionary definitions, we derive a subset S0 of 218 words out of the words in
w2vNEWS. Recall that this embedding is a subset of 26,377 words out of the full 3 million words
in the embedding, as described in Section 2. This base list S0 is given in Appendix F. Note that the
choice of words is subjective and ideally should be customized to the application at hand.
We generalize this list to the entire 3 million words in the Google News embedding using a linear
classifier, resulting in the set S of 6,449 gender-specific words. More specifically, we trained a linear
Support Vector Machine (SVM) with regularization parameter of C = 1.0. We then ran this classifier
on the remaining words, taking S = S0 [ S1 , where S1 are the words labeled as gender specific by
our classifier among the words in the entire embedding that are not in the 26,377 words of w2vNEWS.
Using 10-fold cross-validation to evaluate the accuracy, we find an F-score of .627 ± .102.
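A scikit-learn sketch of this classifier (ours; the labeled seed words stand in for whatever labeled split is used in practice):

from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def fit_gender_specific_classifier(E, specific_words, neutral_words):
    # Linear SVM with C = 1.0 on embedding vectors; F-score via 10-fold CV.
    X = [E[w] for w in specific_words + neutral_words]
    y = [1] * len(specific_words) + [0] * len(neutral_words)
    clf = LinearSVC(C=1.0)
    f1 = cross_val_score(clf, X, y, cv=10, scoring='f1').mean()
    clf.fit(X, y)
    return clf, f1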
Figure 3 illustrates the results of the classifier for separating gender-specific words from gender-neutral
words. To make the figure legible, we show a subset of the words. The x-axis corresponds to
the projection of words onto the ~she − ~he direction, and the y-axis corresponds to the
distance from the decision boundary of the trained SVM.
6 Debiasing results
We evaluated our debiasing algorithms to ensure that they preserve the desirable properties of the
original embedding while reducing both direct and indirect gender biases. First we used the same
analogy generation task as before: for both the hard-debiased and the soft-debiased embeddings,
we automatically generated pairs of words that are analogous to she-he and asked crowd-workers
to evaluate whether these pairs reflect gender stereotypes. Figure 4 shows the results. On the initial
w2vNEWS embedding, 19% of the top 150 analogies were judged as showing gender stereotypes
by a majority of the ten workers. After applying our hard debiasing algorithm, only 6% of the new
embedding were judged as stereotypical.
As an example, consider the analogy puzzle, he to doctor is as she to X. The original embedding
returns X = nurse while the hard-debiased embedding finds X = physician. Moreover, the hard-debiasing
algorithm preserved gender-appropriate analogies such as she to ovarian cancer is as he
to prostate cancer. This demonstrates that hard-debiasing has effectively reduced the gender
stereotypes in the word embedding. Figure 4 also shows that the number of appropriate analogies
remains similar to that in the original embedding after hard-debiasing. This demonstrates that
the quality of the embeddings is preserved. The detailed results are in Appendix J. Soft-debiasing
was less effective in removing gender bias. To further confirm the quality of embeddings after
debiasing, we tested the debiased embedding on several standard benchmarks that measure whether
related words have similar embeddings as well as how well the embedding performs in analogy tasks.
Appendix Table 2 shows the results on the original and the new embeddings and the transformation
does not negatively impact the performance. In Appendix A, we show how our algorithm also reduces
indirect gender bias.
7 Discussion
Word embeddings help us further our understanding of bias in language. We find a single direction
that largely captures gender, that helps us capture associations between gender neutral words and
gender as well as indirect inequality. The projection of gender neutral words on this direction enables
us to quantify their degree of female- or male-bias.
To reduce the bias in an embedding, we change the embeddings of gender neutral words, by removing
their gender associations. For instance, nurse is moved to be equally male and female in the
direction g. In addition, we find that gender-specific words have additional biases beyond g. For
instance, grandmother and grandfather are both closer to wisdom than gal and guy are, which does not
reflect a gender difference. On the other hand, the fact that babysit is so much closer to grandmother
than grandfather (more than for other gender pairs) is a gender bias specific to grandmother. By
equating grandmother and grandfather outside of gender, and since we've removed g from babysit,
both grandmother and grandfather are equally close to babysit after debiasing. By retaining the
gender component for gender-specific words, we maintain analogies such as she:grandmother
:: he:grandfather. Through empirical evaluations, we show that our hard-debiasing algorithm
significantly reduces both direct and indirect gender bias while preserving the utility of the embedding.
We have also developed a soft-embedding algorithm which balances reducing bias with preserving
the original distances, and could be appropriate in specific settings.
One perspective on bias in word embeddings is that it merely reflects bias in society, and therefore
one should attempt to debias society rather than word embeddings. However, by reducing the bias in today's computer systems (or at least not amplifying it), which are increasingly reliant on word embeddings, debiased word embeddings can hopefully contribute in a small way to reducing gender
bias in society. At the very least, machine learning should not be used to inadvertently amplify these
biases, as we have seen can naturally happen.
In specific applications, one might argue that gender biases in the embedding (e.g. computer
programmer is closer to he) could capture useful statistics and that, in these special cases, the original
biased embeddings could be used. However, given the potential risk of having machine learning algorithms that amplify gender stereotypes and discrimination, we recommend erring on the side of neutrality and using the debiased embeddings provided here as much as possible.
Acknowledgments. The authors thank Tarleton Gillespie and Nancy Baym for numerous helpful discussions. This material is based upon work supported in part by NSF Grants CNS-1330008 and CCF-1527618, by ONR Grant 50202168, NGA Grant HM1582-09-1-0037, and DHS 2013-ST-061-ED0001.
Communities from a Random Sample
1
Lin Chen1,2 , Amin Karbasi1,2 , Forrest W. Crawford2,3
Department of Electrical Engineering, 2 Yale Institute for Network Science,
3
Department of Biostatistics, Yale University
{lin.chen, amin.karbasi, forrest.crawford}@yale.edu
Abstract
Most real-world networks are too large to be measured or studied directly and
there is substantial interest in estimating global network properties from smaller
sub-samples. One of the most important global properties is the number of vertices/nodes in the network. Estimating the number of vertices in a large network is a
major challenge in computer science, epidemiology, demography, and intelligence
analysis. In this paper we consider a population random graph G = (V, E) from the
stochastic block model (SBM) with K communities/blocks. A sample is obtained
by randomly choosing a subset W ? V and letting G(W ) be the induced subgraph
in G of the vertices in W . In addition to G(W ), we observe the total degree of
each sampled vertex and its block membership. Given this partial information, we
propose an efficient PopULation Size Estimation algorithm, called PULSE, that
accurately estimates the size of the whole population as well as the size of each
community. To support our theoretical analysis, we perform an exhaustive set of
experiments to study the effects of sample size, K, and SBM model parameters
on the accuracy of the estimates. The experimental results also demonstrate that
PULSE significantly outperforms a widely-used method called the network scale-up
estimator in a wide variety of scenarios.
1 Introduction
Many real-world networks cannot be studied directly because they are obscured in some way, are too
large, or are too difficult to measure. There is therefore a great deal of interest in estimating properties
of large networks via sub-samples [15, 5]. One of the most important properties of a large network
is the number of vertices it contains. Unfortunately census-like enumeration of all the vertices in a
network is often impossible, so researchers must try to learn about the size of real-world networks
by sampling smaller components. In addition to the size of the total network, there is great interest
in estimating the size of different communities or sub-groups from a sample of a network. Many
real-world networks exhibit community structure, where nodes in the same community have denser
connections than those in different communities [10, 18]. In the following examples, we describe
network size estimation problems in which only a small subgraph of a larger network is observed.
Social networks. The social and economic value of an online social network (e.g. Facebook,
Instagram, Twitter) is closely related to the number of users the service has. When a social networking
service does not reveal the true number of users, economists, marketers, shareholders, or other groups
may wish to estimate the number of people who use the service based on a sub-sample [4].
World Wide Web. Pages on the World-Wide Web can be classified into several categories (e.g.
academic, commercial, media, government, etc.). Pages in the same category tend to have more
connections. Computer scientists have developed crawling methods for obtaining a sub-network of
web pages, along with their hyperlinks to other unknown pages. Using the crawled sub-network and
hyperlinks, they can estimate the number of pages of a certain category [17, 16, 21, 13, 19].
Figure 1: Illustration of the vertex set size estimation problem with N = 13 and K = 2. White vertices are type-1 and gray are type-2. The panels show the observed sample G(W) and the true graph G.
Size of the Internet. The number of computers on the Internet (the size of the Internet) is of great
interest to computer scientists. However, it is impractical to access and enumerate all computers
on the Internet and only a small sample of computers and the connection situation among them are
accessible [24].
Counting terrorists. Intelligence agencies often target a small number of suspicious or radicalized
individuals to learn about their communication network. But agencies typically do not know the
number of people in the network. The number of elements in such a covert network might indicate
the size of a terrorist force, and would be of great interest [7].
Epidemiology. Many of the groups at greatest risk for HIV infection (e.g. sex workers, injection
drug users, men who have sex with men) are also difficult to survey using conventional methods.
Since members of these groups cannot be enumerated directly, researchers often trace social links to
reveal a network among known subjects. Public health and epidemiological interventions to mitigate
the spread of HIV rely on knowledge of the number of HIV-positive people in the population [12, 11,
22, 23, 8].
Counting disaster victims. After a disaster, it can be challenging to estimate the number of people
affected. When logistical challenges prevent all victims from being enumerated, a random sample of
individuals may be possible to obtain [2, 3].
In this paper, we propose a novel method called PULSE for estimating the number of vertices and the
size of individual communities from a random sub-sample of the network. We model the network
as an undirected simple graph G = (V, E), and we treat G as a realization from the stochastic
blockmodel (SBM), a widely-studied extension of the Erdős-Rényi random graph model [20] that accommodates community structures in the network by mapping each vertex into one of K ≥ 1 disjoint types or communities. We construct a sample of the network by choosing a sub-sample of vertices W ⊆ V uniformly at random without replacement, and forming the induced subgraph G(W) of W in G. We assume that the block membership and total degree d(v) of each vertex v ∈ W are observed. We propose a Bayesian estimation algorithm PULSE for N = |V|, the number of vertices in
the network, along with the number of vertices Ni in each block. We first prove important regularity
results for the posterior distribution of N . Then we describe the conditions under which relevant
moments of the posterior distribution exist. We evaluate the performance of P ULSE in comparison
with the popular ?network scale-up? method (NSUM) [12, 11, 22, 23, 8, 14, 9]. We show that while
NSUM is asymptotically unbiased, it suffers from serious finite-sample bias and large variance. We
show that P ULSE has superior performance ? in terms of relative error and variance ? over NSUM in
a wide variety of model and observation scenarios. All proofs are given in the extended version [6].
2
Problem Formulation
The stochastic blockmodel (SBM) is a random graph model that generalizes the Erd?os-R?nyi random
graph [20]. Let G = (V, E) ? G(N, K, p, t) be a realization from an SBM, where N = |V | is the
total number of vertices, the vertices are divided into K types indexed 1, . . . , K, specified by the
map t : V ? {1, . . . , K}, and a type-i vertex and a type-j vertex are connected independently with
PK
probability pij ? [0, 1]. Let Ni be the number of type-i vertices in G, with N = i=1 Ni . The degree
of a vertex v is d(v). An edge is said to be of type-(i, j) if it connects a type-i vertex and a type-j
vertex. A random induced subgraph is obtained by sampling a subset W ? V with |W | = n uniformly
at random without replacement, and forming the induced subgraph, denoted by G(W ). Let Vi be
the number of type-i vertices in the sample and Eij be the number of type-(i, j) edges in the sample.
For a vertex v in the sample, a pendant edge connects vertex v to a vertex outside the sample. Let $\tilde{d}(v) = d(v) - \sum_{w \in W} \mathbf{1}\{\{w,v\} \in E\}$ be the number of pendant edges incident to v. Let $y_i(v)$ be the number of type-(t(v), i) pendant edges of vertex v; i.e., $y_i(v) = \sum_{w \in V \setminus W} \mathbf{1}\{t(w) = i, \{w,v\} \in E\}$. We have $\sum_{i=1}^{K} y_i(v) = \tilde{d}(v)$. Let $\tilde{N}_i = N_i - V_i$ be the number of type-i nodes outside the sample. We define $\tilde{N} = (\tilde{N}_i : 1 \le i \le K)$, $p = (p_{ij} : 1 \le i \le j \le K)$, and $y = (y_i(v) : v \in W, 1 \le i \le K)$. We observe only G(W) and the total degree d(v) of each vertex v in the sample. Assume that we know the type of each vertex in the sample. The observed data D consists of G(W), (d(v) : v ∈ W) and (t(v) : v ∈ W); i.e., D = (G(W), (d(v) : v ∈ W), (t(v) : v ∈ W)).
Problem 1. Given the observed data D, estimate the size of the vertex set, N = |V|, and the size of each community N_i.
Fig. 1 illustrates the vertex set size estimation problem. White nodes are of type-1 and gray nodes
are of type-2. All nodes outside G(W ) are unobserved. We observe the types and the total degree
of each vertex in the sample. Thus we know the number of pendant edges that connect each vertex
in the sample to other, unsampled vertices. However, the destinations of these pendant edges are
unknown to us.
3 Network Scale-Up Estimator
We briefly outline a simple and intuitive estimator for N = |V| that will serve as a comparison to PULSE. The network scale-up method (NSUM) is a simple estimator for the vertex set size of Erdős-Rényi random graphs. It has been used in real-world applications to estimate the size of hidden or hard-to-reach populations such as drug users [12], HIV-infected individuals [11, 22, 23], men who have sex with men (MSM) [8], and homeless people [14]. Consider a random graph that follows the Erdős-Rényi distribution. The expected sum of total degrees in a random sample W of vertices is $E[\sum_{v \in W} d(v)] = n(N-1)p$. The expected number of edges in the sample is $E[E_S] = \binom{n}{2} p$, where $E_S$ is the number of edges within the sample. A simple estimator of the connection probability p is $\hat{p} = E_S / \binom{n}{2}$. Plugging $\hat{p}$ into the moment equation and solving for N yields $\hat{N} = 1 + (n-1) \sum_{v \in W} d(v) / (2 E_S)$, often simplified to $\hat{N}_{NS} = n \sum_{v \in W} d(v) / (2 E_S)$ [12, 11, 22, 23, 8, 14, 9].
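As a concrete illustration, the simplified estimator $\hat{N}_{NS}$ takes only a few lines to compute. This sketch is ours, not code from the paper, and it assumes at least one edge is observed in the sample ($E_S > 0$):

```python
import numpy as np

def nsum_estimate(degrees, n_sample_edges):
    """Network scale-up estimate of the population size N.

    degrees:        total degrees d(v) of the n sampled vertices
    n_sample_edges: number of edges E_S observed within the sample
    """
    n = len(degrees)
    # N_hat = n * sum(d(v)) / (2 * E_S); undefined when E_S == 0
    return n * np.sum(degrees) / (2.0 * n_sample_edges)
```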
Theorem 1. (Proof in [6]) Suppose G follows a stochastic blockmodel with edge probability p_ij > 0 for 1 ≤ i, j ≤ K. For any sufficiently large sample size, the NSUM is positively biased, and $E[\hat{N}_{NS} \mid E_S > 0] - N$ has an asymptotic lower bound $E[\hat{N}_{NS} \mid E_S > 0] - N \gtrsim N/n - 1$ as n becomes large, where for two sequences {a_n} and {b_n}, $a_n \gtrsim b_n$ means that there exists a sequence c_n such that $a_n \ge c_n \sim b_n$; i.e., $a_n \ge c_n$ for all n and $\lim_{n \to \infty} c_n / b_n = 1$. However, as the sample size goes to infinity, the NSUM becomes asymptotically unbiased.
4 Main Results
NSUM uses only aggregate information about the sum of the total degrees of vertices in the sample
and the number of edges in the sample. We propose a novel algorithm, PULSE, that uses individual
degree, vertex type, and the network structure information. Experiments (Section 5) show that it
outperforms NSUM in terms of both bias and variance.
Given p = (p_ij : 1 ≤ i ≤ j ≤ K), the conditional likelihood of the edges in the sample is given by

$$L_W(D; p) = \left( \prod_{1 \le i < j \le K} p_{ij}^{E_{ij}} (1 - p_{ij})^{V_i V_j - E_{ij}} \right) \prod_{i=1}^{K} p_{ii}^{E_{ii}} (1 - p_{ii})^{\binom{V_i}{2} - E_{ii}},$$

and the conditional likelihood of the pendant edges is given by

$$L_{\bar{W}}(D; p) = \prod_{v \in W} \sum_{y(v)} \prod_{i=1}^{K} \binom{\tilde{N}_i}{y_i(v)} \, p_{i,t(v)}^{y_i(v)} (1 - p_{i,t(v)})^{\tilde{N}_i - y_i(v)},$$

where the sum is taken over all $y_i(v)$'s (i = 1, 2, . . . , K) such that $y_i(v) \ge 0$ for all 1 ≤ i ≤ K and $\sum_{i=1}^{K} y_i(v) = \tilde{d}(v)$. Thus the total conditional likelihood is $L(D; p) = L_W(D; p) L_{\bar{W}}(D; p)$.
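For illustration, the log of $L_W(D; p)$ can be computed directly from the sampled block sizes and edge counts. The following sketch is ours; storing E and p as symmetric K×K arrays is our assumption about data layout, not something the paper specifies, and it assumes 0 < p_ij < 1:

```python
import numpy as np

def log_LW(E, V, p):
    """log L_W(D; p): likelihood of the edges observed inside the sample.

    E: (K, K) symmetric array of observed edge counts E_ij
    V: length-K vector of sampled block sizes V_i
    p: (K, K) symmetric array of edge probabilities p_ij
    """
    K = len(V)
    ll = 0.0
    for i in range(K):
        for j in range(i, K):
            # number of possible vertex pairs of type (i, j) in the sample
            pairs = V[i] * (V[i] - 1) // 2 if i == j else V[i] * V[j]
            ll += E[i, j] * np.log(p[i, j]) \
                  + (pairs - E[i, j]) * np.log(1.0 - p[i, j])
    return ll
```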
If we condition on p and y, the likelihood of the edges within the sample is the same as $L_W(D; p)$ since it does not rely on y, while the likelihood of the pendant edges given p and y is

$$L_{\bar{W}}(D; p, y) = \prod_{v \in W} \prod_{i=1}^{K} \binom{\tilde{N}_i}{y_i(v)} \, p_{i,t(v)}^{y_i(v)} (1 - p_{i,t(v)})^{\tilde{N}_i - y_i(v)}.$$

Therefore the total likelihood conditioned on p and y is given by $L(D; p, y) = L_W(D; p) L_{\bar{W}}(D; p, y)$. The conditional likelihood L(D; p) is indeed a function of Ñ. We may view this as the likelihood of Ñ given the data D and the probabilities p; i.e., $L(\tilde{N}; D, p) \triangleq L(D; p)$. Similarly, the likelihood L(D; p, y) conditioned on p and y is a function of Ñ and y. It can be viewed as the joint likelihood of Ñ and y given the data D and the probabilities p; i.e., $L(\tilde{N}, y; D, p) \triangleq L(D; p, y)$, and $\sum_{y} L(\tilde{N}, y; D, p) = L(\tilde{N}; D, p)$, where the sum is taken over all $y_i(v)$'s, v ∈ W and 1 ≤ i ≤ K, such that $y_i(v) \ge 0$ and $\sum_{i=1}^{K} y_i(v) = \tilde{d}(v)$ for all v ∈ W and 1 ≤ i ≤ K. To have a full Bayesian approach, we assume that the joint prior distribution for Ñ and p is π(Ñ, p). Hence, the population size estimation problem is equivalent to the following optimization problem for Ñ:

$$\hat{\tilde{N}} = \arg\max_{\tilde{N}} \int L(\tilde{N}; D, p) \, \pi(\tilde{N}, p) \, dp. \qquad (1)$$

Then we estimate the total population size as $\hat{N} = \sum_{i=1}^{K} \hat{\tilde{N}}_i + |W|$.
We briefly study the regularity of the posterior distribution of N. In order to learn about Ñ, we must observe enough vertices from each block type, and enough edges connecting members of each block, so that the first and second moments of the posterior distribution exist. Intuitively, in order for the first two moments to exist, either we must observe many edges connecting vertices of each block type, or we must have sufficiently strong prior beliefs about p_ij.

Theorem 2. (Proof in [6]) Assume that π(Ñ, p) = π(Ñ)π(p) and that p_ij follows the Beta distribution B(α_ij, β_ij) independently for 1 ≤ i ≤ j ≤ K. Let $\eta = \min_{1 \le i \le K} \sum_{j=1}^{K} (E_{ij} + \alpha_{ij})$. If π(Ñ) is bounded and η > n + 1, then the n-th moment of N exists.
In particular, if η > 3, the variance of N exists. Theorem 2 gives the minimum possible number of edges in the sample needed to make posterior sampling meaningful. If the prior distribution of p_ij is Uniform[0, 1], then we need at least three edges incident on type-i vertices for all types i = 1, 2, 3, . . . , K to guarantee the existence of the posterior variance.
4.1 Erdős-Rényi Model
In order to better understand how PULSE estimates the size of a general stochastic blockmodel, we study the Erdős-Rényi case where K = 1 and all vertices are connected independently with probability p. Let N denote the total population size, W be the sample with size |W| = V_1, and Ñ = N − |W|. For each vertex v ∈ W in the sample, let $\tilde{d}(v) = y(v)$ denote the number of pendant edges of vertex v, and let E = E_11 be the number of edges within the sample. Then

$$L_W(D; p) = p^{E} (1-p)^{\binom{|W|}{2} - E}, \qquad L_{\bar{W}}(D; p) = \prod_{v \in W} \binom{\tilde{N}}{\tilde{d}(v)} p^{\tilde{d}(v)} (1-p)^{\tilde{N} - \tilde{d}(v)}.$$

In the Erdős-Rényi case, $y(v) = \tilde{d}(v)$ and thus $L_{\bar{W}}(D; p) = L_{\bar{W}}(D; p, y)$. Therefore, the total likelihood of Ñ conditioned on p is given by

$$L(\tilde{N}; D, p) = L_W(D; p) L_{\bar{W}}(D; p) = p^{E} (1-p)^{\binom{|W|}{2} - E} \prod_{v \in W} \binom{\tilde{N}}{\tilde{d}(v)} p^{\tilde{d}(v)} (1-p)^{\tilde{N} - \tilde{d}(v)}.$$

We assume that p has a beta prior B(α, β) and that Ñ has a prior π(Ñ). Let

$$L(\tilde{N}; D) = \prod_{v \in W} \binom{\tilde{N}}{\tilde{d}(v)} \cdot B\!\left(E + u + \alpha, \; \binom{|W|}{2} - E + |W|\tilde{N} - u + \beta\right),$$

where $u = \sum_{v \in W} \tilde{d}(v)$. The posterior probability Pr[Ñ | D] is proportional to $\pi(\tilde{N}; D) \triangleq \pi(\tilde{N}) L(\tilde{N}; D)$. The algorithm is presented in Algorithm 1.
Algorithm 1 Population size estimation algorithm PULSE (Erdős-Rényi case)
Input: Data D; initial guess for N, denoted N(0); parameters of the beta prior, α and β
Output: Estimate for the population size N̂
1: Ñ(0) ← N(0) − |W|
2: τ ← 1
3: repeat
4:   Propose Ñ′(τ) according to a proposal distribution g(Ñ(τ−1) → Ñ′(τ))
5:   q ← min{1, [π(Ñ′(τ); D) g(Ñ′(τ) → Ñ(τ−1))] / [π(Ñ(τ−1); D) g(Ñ(τ−1) → Ñ′(τ))]}
6:   Ñ(τ) ← Ñ′(τ) with probability q; otherwise Ñ(τ) ← Ñ(τ−1)
7:   τ ← τ + 1
8: until some termination condition is satisfied
9: Regard {Ñ(τ) : τ > τ0} as the sampled posterior distribution for Ñ
10: Let N̂ be the posterior mean with respect to the sampled posterior distribution
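To illustrate Algorithm 1, here is a minimal Metropolis-Hastings sketch in Python that evaluates the closed form π(Ñ; D) derived above. It assumes a flat prior on Ñ and uses a simple symmetric random-walk proposal instead of the paper's proposal g, with rejection below the hard lower bound max_v d̃(v), so it is only an approximation to the exact sampler; all names are ours.

```python
import numpy as np
from scipy.special import betaln, gammaln

def log_posterior(N_t, pend, E, W2, alpha=1.0, beta=1.0):
    """log pi(N_tilde; D) up to a constant, for a flat prior on N_tilde.

    N_t:  candidate number of unobserved vertices (N - |W|)
    pend: NumPy array of pendant-edge counts d~(v), one per sampled vertex
    E:    number of edges inside the sample
    W2:   |W| choose 2
    """
    if N_t < pend.max():
        return -np.inf                       # binomial coefficients vanish
    u = pend.sum()
    log_binoms = (gammaln(N_t + 1) - gammaln(pend + 1)
                  - gammaln(N_t - pend + 1)).sum()
    return log_binoms + betaln(E + u + alpha,
                               W2 - E + len(pend) * N_t - u + beta)

def pulse_er(pend, E, n, iters=20000, burn=5000, step=5, seed=0):
    """Metropolis-Hastings sketch for the Erdos-Renyi case of PULSE."""
    rng = np.random.default_rng(seed)
    W2 = n * (n - 1) // 2
    N_t = max(int(pend.max()), 1)            # crude initial guess
    samples = []
    for t in range(iters):
        prop = N_t + rng.integers(-step, step + 1)   # symmetric random walk
        if prop >= pend.max():
            log_q = (log_posterior(prop, pend, E, W2)
                     - log_posterior(N_t, pend, E, W2))
            if np.log(rng.random()) < log_q:
                N_t = prop
        if t >= burn:
            samples.append(N_t)
    return np.mean(samples) + n              # posterior mean of N = N_tilde + |W|
```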
Algorithm 2 Population size estimation algorithm PULSE (general stochastic blockmodel case)
Input: Data D; initial guess for Ñ, denoted Ñ(0); initial guess for y, denoted y(0); parameters of the beta prior, α_ij and β_ij, 1 ≤ i ≤ j ≤ K
Output: Estimate for the population size N̂
1: τ ← 1
2: repeat
3:   Randomly decide whether to update Ñ or y
4:   if update Ñ then
5:     Randomly select i ∈ [1, K] ∩ N
6:     Ñ* ← Ñ(τ−1)
7:     Propose Ñ*_i according to the proposal distribution g_i(Ñ_i(τ−1) → Ñ*_i)
8:     q ← min{1, [π(Ñ*, y; D) g_i(Ñ*_i → Ñ_i(τ−1))] / [π(Ñ(τ−1), y; D) g_i(Ñ_i(τ−1) → Ñ*_i)]}
9:     Ñ(τ) ← Ñ* with probability q; otherwise Ñ(τ) ← Ñ(τ−1)
10:    y(τ) ← y(τ−1)
11:  else
12:    Randomly select v ∈ W
13:    y* ← y(τ−1)
14:    Propose y(v)* according to the proposal distribution h_v(y(v)(τ−1) → y(v)*)
15:    q ← min{1, [L(Ñ, y*; D) h_v(y(v)* → y(v)(τ−1))] / [L(Ñ, y; D) h_v(y(v)(τ−1) → y(v)*)]}
16:    y(τ) ← y* with probability q; otherwise y(τ) ← y(τ−1)
17:    Ñ(τ) ← Ñ(τ−1)
18:  end if
19:  τ ← τ + 1
20: until some termination condition is satisfied
21: Regard {Ñ(τ) : τ > τ0} as the sampled posterior distribution for Ñ
22: Let N̂ be the posterior mean of Σ_{i=1}^K Ñ_i + |W| with respect to the sampled posterior distribution

4.2 General Stochastic Blockmodel
In the Erdős-Rényi case, $y(v) = \tilde{d}(v)$. However, in the general stochastic blockmodel case, in addition to the unknown variables Ñ_1, Ñ_2, . . . , Ñ_K to be estimated, we do not know $y_i(v)$ (v ∈ W, i = 1, 2, . . . , K) either. The expression $L_{\bar{W}}(D; p)$ involves a costly summation over all possibilities of integer composition of $\tilde{d}(v)$ (v ∈ W). However, the joint posterior distribution for Ñ and y, which is proportional to $\int L(\tilde{N}, y; D, p) \pi(\tilde{N}) \pi(p) \, dp$, does not involve summing over integer partitions; thus we may sample from the joint posterior distribution for Ñ and y, and obtain the marginal distribution for Ñ. Our proposed algorithm PULSE realizes this idea. Let $L(\tilde{N}, y; D) = \int L(\tilde{N}, y; D, p) \pi(p) \, dp$. We know that the joint posterior distribution for Ñ and y, denoted by Pr[Ñ, y | D], is proportional to $\pi(\tilde{N}, y; D) \triangleq L(\tilde{N}, y; D) \pi(\tilde{N})$. In addition, the conditional distributions Pr[Ñ_i | Ñ_{−i}, y] and Pr[y(v) | Ñ, y(−v)] are also proportional to $L(\tilde{N}, y; D) \pi(\tilde{N})$, where Ñ_{−i} = (Ñ_j : 1 ≤ j ≤ K, j ≠ i), y(v) = (y_i(v) : 1 ≤ i ≤ K), and y(−v) = (y(w) : w ∈ W, w ≠ v). The proposed algorithm PULSE is a Gibbs sampling process that samples from the joint posterior distribution (i.e., Pr[Ñ, y | D]), as specified in Algorithm 2.

For every v ∈ W and i = 1, 2, . . . , K, we must have 0 ≤ y_i(v) ≤ Ñ_i, because the number of type-(i, t(v)) pendant edges of vertex v must not exceed the total number of type-i vertices outside the sample. Therefore, Ñ_i ≥ max_{v∈W} y_i(v) must hold for every i = 1, 2, . . . , K. These observations put constraints on the choice of proposal distributions g_i and h_v, i = 1, 2, . . . , K and v ∈ W; i.e., the support of g_i must be contained in [max_{v∈W} y_i(v), ∞) ∩ ℕ and the support of h_v must be contained in {y(v) : 0 ≤ y_i(v) ≤ Ñ_i for all 1 ≤ i ≤ K, and Σ_{i=1}^K y_i(v) = d̃(v)}.
Let Δ_i be the window size for Ñ_i, taking values in ℕ. Let l = max{max_{v∈W} y_i(v), Ñ_i(τ−1) − Δ_i}. Let the proposal distribution g_i be defined as

$$g_i(\tilde{N}_i^{(\tau-1)} \to \tilde{N}_i^{*}) = \begin{cases} \frac{1}{2\Delta_i + 1} & \text{if } l \le \tilde{N}_i^{*} \le l + 2\Delta_i \\ 0 & \text{otherwise.} \end{cases}$$

The proposed value Ñ*_i is always greater than or equal to max_{v∈W} y_i(v). This proposal distribution is uniform within the window [l, l + 2Δ_i], and thus the proposal ratio is g_i(Ñ*_i → Ñ_i(τ−1)) / g_i(Ñ_i(τ−1) → Ñ*_i) = 1. The proposal for y(v) is detailed in the extended version [6].
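For concreteness, the windowed uniform proposal can be sketched as follows. This is our notation, not the paper's code; `rng` is a NumPy random generator:

```python
import numpy as np

def propose_Ni(N_i_cur, y_max, delta, rng):
    """Windowed uniform proposal g_i for N_tilde_i.

    y_max: max over v in W of y_i(v), a hard lower bound on N_tilde_i
    delta: window half-width Delta_i
    """
    l = max(y_max, N_i_cur - delta)
    # uniform over the 2*delta + 1 integers {l, ..., l + 2*delta}
    return rng.integers(l, l + 2 * delta + 1)
```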
5 Experiment
5.1 Erdős-Rényi
Effect of Parameter p. We first evaluate the performance of PULSE in the Erdős-Rényi case. We fix the size of the network at N = 1000 and the sample size |W| = 280 and vary the parameter p. For each p ∈ [0.1, 0.9], we sample 100 graphs from G(N, p). For each selected graph, we compute NSUM and run PULSE 50 times (as it is a randomized algorithm) to compute its performance. We record the relative errors in the Tukey boxplots shown in Fig. 2a. The posterior mean proposed by PULSE is an accurate estimate of the size. For the parameter p varying from 0.1 to 0.9, most of the relative errors are bounded between −1% and 1%. We also observe that the NSUM tends to overestimate the size, as it shows a positive bias. This confirms experimentally the result of Theorem 1. For both methods, the interquartile ranges (IQRs, hereinafter) correlate negatively with p. This shows that the variance of both estimators shrinks when the graph becomes denser. The relative errors of PULSE tend to concentrate around 0 with larger p, which means that the performance of PULSE improves with larger p. In contrast, a larger p does not improve the bias of the NSUM.

Effect of Network Size N. We fix the parameter p = 0.3 and the sample size |W| = 280 and vary the network size N from 400 to 1000. For each N ∈ [400, 1000], we randomly pick 100 graphs from G(N, p). For each selected graph, we compute NSUM and run PULSE 50 times. We illustrate the results via Tukey boxplots in Fig. 2b. Again, the estimates given by PULSE are very accurate. Most of the relative errors reside in [−0.5%, 0.5%] and almost all reside in [−1%, 1%]. We also observe that smaller network sizes can be estimated more accurately, as PULSE will have a smaller variance. For example, when the network size is N = 400, almost all of the relative errors are bounded in the range [−0.7%, 0.7%], while for N = 1000, the relative errors are in [−1.5%, 1.5%]. This agrees with our intuition that the performance of estimation improves with a larger sampling fraction. In contrast, NSUM heavily overestimates the network size as the size increases. In addition, its variance also correlates positively with network size.

Effect of Sample Size |W|. We study the effect of the sample size |W| on the estimation error. Thus, we fix the size N = 1000 and the parameter p = 0.3, and we vary the sample size |W| from 100 to 500. For each |W| ∈ [100, 500], we randomly select 100 graphs from G(N, p). For every selected graph, we compute the NSUM estimate, run PULSE 50 times, and record the relative errors. The results are presented in Fig. 2c. We observe for both methods that the IQR shrinks as the sample size increases; thus a larger sample size reduces the variance of both estimators. PULSE does not exhibit appreciable bias when the sample size varies from 100 to 500. Again, NSUM overestimates the size; however, its bias reduces when the sample size becomes large. This reconfirms Theorem 1.
5.2 General Stochastic Blockmodel
Effect of Sample Size and Type Partition. Here, we study the effect of the sample size and the type partition. We set the network size N to 200 and we assume that there are two types of vertices in this network: type 1 and type 2, with N_1 and N_2 nodes, respectively. The ratio N_1/N quantifies the type partition. We vary N_1/N from 0.2 to 0.8 and the sample size |W| from 40 to 160. For each combination of N_1/N and the sample size |W|, we generate 50 graphs with p_11, p_22 ∼ Uniform[0.5, 1] and p_12 = p_21 ∼ Uniform[0, min{p_11, p_22}]. For each graph, we compute the NSUM and obtain the average relative error. Similarly, for each graph, we run PULSE 10 times in order to compute the average relative error over the 50 graphs and the 10 estimates for each graph. The results are shown as heat maps in Fig. 2d. Note that the color bar on the right side of Fig. 2d is on a logarithmic scale.
Figure 2: Fig. 2a, 2b and 2c are the results of the Erdős-Rényi case: (a) Effect of parameter p on the estimation error. (b) Effect of the network size on the estimation error. (c) Effect of the sample size on the estimation error. Fig. 2d, 2e, 2f and 2g are the results of the general SBM case: (d) Effect of sample size and type partition on the relative error. Note that the color bar on the right is on a logarithmic scale. (e) Effect of deviation from the Erdős-Rényi model (controlled by ε) on the relative error of NSUM and PULSE in the SBM with K = 2. (f) Effect of deviation from the Erdős-Rényi model (controlled by ε) on the relative error of PULSE in estimating the number of type-1 and type-2 nodes in the SBM with K = 2. (g) Effect of the number of types K and the sample size on the population estimation. The percentages are the sampling fractions n/N. The horizontal axis represents the number of types K, which varies from 1 to 6. The vertical axis is the relative error in percentage.
In general, the estimates given by PULSE are very accurate and exhibit significant superiority over the NSUM estimates. The largest relative errors of PULSE in absolute value, which are approximately 1%, appear in the upper-left and lower-left corners of the heat map. The performance of the NSUM (see the right subfigure in Fig. 2d) is robust to the type partition and, equivalently, the ratio N_1/N. As we enlarge the sample size, its relative error decreases.
The left subfigure in Fig. 2d shows the performance of PULSE. When the sample size is small, the relative error decreases as N_1/N increases from 0.2 to 0.5; when N_1/N rises from 0.5 to 0.8, the relative error becomes large. Given the fixed ratio N_1/N, as expected, the relative error declines when we have a larger sample. This agrees with our observation in the Erdős-Rényi case. However, when the sample size is large, PULSE exhibits better performance when the type partition is more homogeneous. There is a local minimum relative error in absolute value shown at the center of the subfigure. PULSE performs best when there is a balance between the number of edges in the sampled induced subgraph and the number of pendant edges emanating outward. Larger sampled subgraphs allow more precision in knowledge about p_ij, but more pendant edges allow for better estimation of y, and hence each N_i. Thus when the sample is about half of the network size, the balanced combination of the number of edges within the sample and those emanating outward leads to better performance.
Effect of Intra- and Inter-Community Edge Probability. Suppose that there are two types of nodes in the network. The mean degree is given by

$$d_{\mathrm{mean}} = \frac{2}{N}\left[\binom{N_1}{2} p_{11} + \binom{N_2}{2} p_{22} + N_1 N_2 p_{12}\right].$$

We want to keep the mean degree constant and vary the random graph gradually so that we observe 3 phases: high intra-community and low inter-community edge probability (more cohesive), Erdős-Rényi, and low intra-community and high inter-community edge probability (more incohesive). We introduce a cohesion parameter ε. In the two-block model, we have p_11 = p_22 = p_12 = p̄, where p̄ is a constant. Let us call ε the deviation from this situation and let

$$p_{11} = \bar{p} + \frac{N_2}{N_1 - 1}\,\epsilon, \qquad p_{22} = \bar{p} + \frac{N_1}{N_2 - 1}\,\epsilon, \qquad p_{12} = \bar{p} - \epsilon.$$

The mean degree stays constant for different ε. In addition, p_11, p_12 and p_22 must reside in [0, 1]. This requirement can be met if we set the absolute value of ε small enough. By changing ε from positive to negative we go from cohesive behavior to incohesive behavior. Clearly, for ε = 0, the graph becomes an Erdős-Rényi graph with p_11 = p_22 = p_12 = p̄.
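A small sketch of this parameterization (ours, not the authors' code). Plugging in the values used below (N_1 = 350, N_2 = 500, p̄ = 0.5, ε = 0.3) reproduces the probabilities 0.9298, 0.7104, and 0.2 reported in the next paragraph, and mean_degree returns the same value for every ε:

```python
def block_probs(p_bar, eps, N1, N2):
    """Two-block SBM probabilities with the mean degree held fixed as eps varies."""
    p11 = p_bar + N2 * eps / (N1 - 1)
    p22 = p_bar + N1 * eps / (N2 - 1)
    p12 = p_bar - eps
    return p11, p22, p12

def mean_degree(p11, p22, p12, N1, N2):
    """Mean degree of the two-block SBM: (2/N) * expected number of edges."""
    N = N1 + N2
    return (2.0 / N) * (N1 * (N1 - 1) / 2 * p11
                        + N2 * (N2 - 1) / 2 * p22
                        + N1 * N2 * p12)
```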
We set the network size N to 850, N_1 to 350, and N_2 to 500. We fix p̄ = 0.5 and let ε vary from −0.3 to 0.3. When ε = 0.3, the intra-community edge probabilities are p_11 = 0.9298 and p_22 = 0.7104 and the inter-community edge probability is p_12 = 0.2. When ε = −0.3, the intra-community edge probabilities are p_11 = 0.0702 and p_22 = 0.2896 and the inter-community edge probability is p_12 = 0.8. For each ε, we generate 500 graphs and for each graph, we run PULSE 50 times. Given each value of ε, relative errors are shown in box plots. We present the results in Fig. 2e as we vary ε. From Fig. 2e, we observe that despite deviation from the Erdős-Rényi graph, both methods are robust. However, the figure indicates that PULSE is unbiased (as the median is around zero) while NSUM overestimates the size on average. This again confirms Theorem 1.

An important feature of PULSE is that it can also estimate the number of nodes of each type, while NSUM cannot. The results for type-1 and type-2 with different ε are shown in Fig. 2f. We observe that the medians of all boxes agree with the 0% line; thus the separate estimates for N_1 and N_2 are unbiased. Note that when the edge probabilities are more homogeneous (i.e., when the graph becomes more similar to the Erdős-Rényi model), the IQRs, as well as the intervals between the two ends of the whiskers, become larger. This shows that when we try to fit an Erdős-Rényi model (a single-type stochastic blockmodel) to a two-type model, the variance becomes larger.
Effect of Number of Types and Sample Size. Finally, we study the impact of the number of types K and the sample size |W| = n on the relative error. To generate graphs with different numbers of types, we use a Chinese restaurant process (CRP) [1], as sketched below. We set the total number of vertices to 200, first pick 100 vertices and use the Chinese restaurant process to assign them to different types. Suppose that the CRP gives K types; we then distribute the remaining 100 vertices evenly among the K types. The edge probability p_ii (1 ≤ i ≤ K) is sampled from Uniform[0.7, 1] and p_ij (1 ≤ i < j ≤ K) is sampled from Uniform[0, min{p_ii, p_jj}], all independently.
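A minimal sketch of the CRP assignment step (our implementation; the concentration parameter theta is an assumption, since the paper does not report the value it used):

```python
import numpy as np

def crp_assign(n, theta, rng):
    """Chinese restaurant process: assign n items to types.

    Each new item joins an existing type with probability proportional to
    its current size, or opens a new type with probability proportional
    to theta. Returns the assignments and the number of types K.
    """
    types = [0]
    counts = [1]
    for _ in range(1, n):
        probs = np.array(counts + [theta], dtype=float)
        k = rng.choice(len(probs), p=probs / probs.sum())
        if k == len(counts):
            counts.append(1)          # open a new table/type
        else:
            counts[k] += 1
        types.append(k)
    return types, len(counts)
```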
We set the sampling fraction n/N to 33%, 50% and 66%, and use NSUM and PULSE to estimate the network size. Relative estimation errors are illustrated in Fig. 2g. We observe that with the same sampling fraction n/N and the same number of types K, PULSE has a smaller relative error than the NSUM. Similarly, the interquartile range of PULSE is also smaller than that of the NSUM. Hence, PULSE provides higher accuracy with a smaller variance. For both methods the relative error decreases (in absolute value) as the sampling fraction increases. Accordingly, the IQRs also shrink for larger sampling fractions. With the sampling fraction fixed, the IQRs become larger when we increase the number of types in the graph. The variance of both methods increases for increasing values of K. The median of NSUM is always above 0, which indicates that it overestimates the network size.
Acknowledgements
This research was supported by Google Faculty Research Award, DARPA Young Faculty Award
(D16AP00046), NIH grants from NICHD DP2HD091799, NCATS KL2 TR000140, and NIMH
P30 MH062294, the Yale Center for Clinical Investigation, and the Yale Center for Interdisciplinary
Research on AIDS. LC thanks Zheng Wei for his consistent support.
References
[1] D. J. Aldous. Exchangeability and related topics. Springer, 1985.
[2] H. Bernard, E. Johnsen, P. Killworth, and S. Robinson. How many people died in the Mexico City earthquake. Estimating the Number of People in an Average Network and in an Unknown Event Population. The Small World, ed. M. Kochen (forthcoming). Newark, 1988.
[3] H. R. Bernard, P. D. Killworth, E. C. Johnsen, G. A. Shelley, and C. McCarty. Estimating the ripple effect of a disaster. Connections, 24(2):18–22, 2001.
[4] M. S. Bernstein, E. Bakshy, M. Burke, and B. Karrer. Quantifying the invisible audience in social networks. In Proc. SIGCHI, pages 21–30. ACM, 2013.
[5] L. Chen, F. W. Crawford, and A. Karbasi. Seeing the unseen network: Inferring hidden social ties from respondent-driven sampling. In AAAI, pages 1174–1180, 2016.
[6] L. Chen, A. Karbasi, and F. W. Crawford. Estimating the size of a large network and its communities from a random sample. arXiv preprint arXiv:1610.08473, 2016. https://arxiv.org/abs/1610.08473.
[7] F. W. Crawford. The graphical structure of respondent-driven sampling. Sociological Methodology, 46(1):187–211, 2016.
[8] S. Ezoe, T. Morooka, T. Noda, M. L. Sabin, and S. Koike. Population size estimation of men who have sex with men through the network scale-up method in Japan. PLoS One, 7(1):e31184, 2012.
[9] D. M. Feehan and M. J. Salganik. Generalizing the network scale-up method: a new estimator for the size of hidden populations. Sociological Methodology, 46(1):153–186, 2016.
[10] M. Girvan and M. E. Newman. Community structure in social and biological networks. PNAS, 99(12):7821–7826, 2002.
[11] W. Guo, S. Bao, W. Lin, G. Wu, W. Zhang, W. Hladik, A. Abdul-Quader, M. Bulterys, S. Fuller, and L. Wang. Estimating the size of HIV key affected populations in Chongqing, China, using the network scale-up method. PLoS One, 8(8):e71796, 2013.
[12] C. Kadushin, P. D. Killworth, H. R. Bernard, and A. A. Beveridge. Scale-up methods as applied to estimates of heroin use. Journal of Drug Issues, 2006.
[13] L. Katzir, E. Liberty, and O. Somekh. Estimating sizes of social networks via biased sampling. In WWW, pages 597–606. ACM, 2011.
[14] P. D. Killworth, C. McCarty, H. R. Bernard, G. A. Shelley, and E. C. Johnsen. Estimation of seroprevalence, rape, and homelessness in the United States using a social network approach. Eval. Rev., 22(2):289–308, 1998.
[15] A. S. Maiya and T. Y. Berger-Wolf. Benefits of bias: Towards better characterization of network sampling. In Proc. SIGKDD, pages 105–113. ACM, 2011.
[16] L. Massoulié, E. Le Merrer, A.-M. Kermarrec, and A. Ganesh. Peer counting and sampling in overlay networks: random walk methods. In Proc. PODC, pages 123–132. ACM, 2006.
[17] B. H. Murray and A. Moore. Sizing the internet. White paper, Cyveillance, page 3, 2000.
[18] M. E. Newman. Modularity and community structure in networks. PNAS, 103(23):8577–8582, 2006.
[19] M. Papagelis, G. Das, and N. Koudas. Sampling online social networks. TKDE, 25(3):662–676, 2013.
[20] A. Rényi and P. Erdős. On random graphs. Publicationes Mathematicae, 6:290–297, 1959.
[21] B. Ribeiro and D. Towsley. Estimating and sampling graphs with multidimensional random walks. In Proc. IMC, pages 390–403. ACM, 2010.
[22] M. J. Salganik, D. Fazito, N. Bertoni, A. H. Abdo, M. B. Mello, and F. I. Bastos. Assessing network scale-up estimates for groups most at risk of HIV/AIDS: evidence from a multiple-method study of heavy drug users in Curitiba, Brazil. American Journal of Epidemiology, 174(10):1190–1196, 2011.
[23] M. Shokoohi, M. R. Baneshi, and A.-a. Haghdoost. Size estimation of groups at high risk of HIV/AIDS using network scale up in Kerman, Iran. Int'l J. Prev. Med., 3(7):471, 2012.
[24] S. Xing and B.-P. Paris. Measuring the size of the internet via importance sampling. J. Sel. Areas Commun., 21(6):922–933, 2003.
Performance Through Consistency:
MS-TDNN's for Large Vocabulary
Continuous Speech Recognition
Joe Tebelskis and Alex Waibel
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213
Abstract
Connectionist speech recognition systems are often handicapped by
an inconsistency between training and testing criteria. This problem is addressed by the Multi-State Time Delay Neural Network
(MS-TDNN), a hierarchical phoneme and word classifier which uses
DTW to modulate its connectivity pattern, and which is directly
trained on word-level targets. The consistent use of word accuracy as a criterion during both training and testing leads to very
high system performance, even with limited training data. Until
now, the MS-TDNN has been applied primarily to small vocabulary recognition and word spotting tasks. In this paper we apply
the architecture to large vocabulary continuous speech recognition,
and demonstrate that our MS-TDNN outperforms all other systems that have been tested on the CMU Conference Registration
database.
1 INTRODUCTION
Neural networks hold great promise in the area of speech recognition. But in order
to fulfill their promise, they must be used "properly". One obvious condition of
"proper" use is that both training and testing should use a consistent error criterion. Unfortunately, in speech recognition, this obvious condition is often violated:
networks are frequently trained using phoneme-level criteria (phoneme classification
or acoustic prediction), while the testing criterion is word recognition accuracy. If
phoneme recognition were perfect, then word recognition would also be perfect; but
of course this is not the case, and the errors which are inevitably made are optimized
for the wrong criterion, resulting in suboptimal word recognition accuracy.
The Multi-State Time Delay Neural Network (MS-TDNN) has recently been proposed as a solution to this problem [1]. The MS-TDNN is a hierarchically structured
classifier which consistently uses word accuracy as a criterion for both training and
testing. It has so far yielded excellent results on small vocabulary recognition [1, 2, 3]
and a word spotting task [4]. In the present paper, we review the MS-TDNN architecture, discuss its application to large vocabulary continuous speech recognition,
and present some favorable experimental results on this task.
2 RESOLVING INCONSISTENCIES: MS-TDNN ARCHITECTURE
In this section we motivate the design of our MS-TDNN by showing how a series of
intermediate designs resolve successive inconsistencies.
The preliminary system architecture, shown in Figure 1(a), simply consists of a
phoneme classifier, in this case a TDNN, whose outputs are copied into a DTW
matrix, in which continuous speech recognition is performed. Many existing systems
are based on this type of approach. While this is fine for bootstrapping purposes,
the design is ultimately suboptimal because the training criterion is inconsistent
with the testing criterion: phoneme classification is not word classification.
To address this inconsistency we must train the network explicitly to perform word
classification. To this end, we define a word layer with one unit for each word in the
vocabulary; the idea is illustrated in Figure 1(b) for the word "cat". We correlate the
activation of the word unit with the associated DTW score by establishing connections from the DTW alignment path to the word unit. Also, we give the phonemes
within a word independently trainable weights, to enhance word discrimination (for
example, to discriminate "cat" from "mat" it may be useful to give special emphasis
to the first phoneme); these weights are tied over all frames in which the phoneme
occurs. Thus a word unit is an ordinary unit, except that its connectivity to the
preceding layer is determined dynamically, and its net input should be normalized
by the total duration of the word. The word unit is trained on a target of 1 or
0, depending on whether the word is correct or incorrect for the current segment of speech,
and the resulting error is backpropagated through the entire network. Thus, word
discrimination is treated very much like phoneme discrimination.
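The word-unit forward pass described above can be summarized in a few lines. The following is a minimal sketch, assuming the DTW alignment path has already been computed; the function name, array shapes and data layout are illustrative rather than taken from the paper:

import numpy as np

def word_unit_activation(phoneme_acts, alignment, weights, biases):
    """Duration-normalized score of one (linear) word unit.

    phoneme_acts: (T, P) array of TDNN phoneme activations per frame.
    alignment:    list of (frame, phoneme) pairs on the DTW path.
    weights, biases: per-phoneme parameters, tied over all frames
                     in which the phoneme occurs.
    """
    score = 0.0
    for t, p in alignment:
        # Each aligned frame contributes a weighted, biased phoneme score.
        score += weights[p] * phoneme_acts[t, p] + biases[p]
    return score / len(alignment)  # normalize by the word's duration

# Word-level training compares this score against a binary target
# (1 for the correct word, 0 for a close incorrect match) and
# backpropagates through weights, biases and the TDNN below.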
Although network (b) resolves the original inconsistency, it now suffers from a secondary one - namely, that the weights leading to a word unit are used during
training but ignored during testing, since DTW is still performed entirely in the
DTW layer. We resolve this inconsistency by "pushing down" these weights one
level, as shown in Figure 1(c). Now the phoneme activations are no longer directly
copied into the DTW layer, but instead are modulated by a weight and bias before
being stored there (DTW units are linear); and the word unit has constant weights,
and no bias. During word-level training, error is still backpropagated from targets
at the word level, but biases and weights are modified only at the DTW level and
"CAT"
r~n
Word [
DTW[
...oC
oC
U
U
T7eBt
...
copy
copy
N
N
...!
...;
?oC
?oC
u
u
??
??
TDNN
hidden
hiddell
Speech: "CAT"
Speech: "CAT"
(a)
(b)
"CAT"
"CAT"
Word [
DTW[
r"f-in
...oC
U
r . f-;n
...oC
u
w2
N
N
...;!
...;!
U
?oC
w3
U
?oC
??
TDNN
hidtUn
Speech: "CAT"
Speech: "CAT'
(e)
(d)
Figure 1: Resolving inconsistencies: (a) TDNN+DTW. (b) Adding word layer.
(c) Pushing down weights. (d) Linear word units, for continuous speech recognition.
below. Note that this transformed network is not exactly equivalent to the previous one, but it preserves the properties that there are separate learned weights
associated with each phoneme, and there is an effective bias for each word.
Network (c) is still flawed by a minor inconsistency, arising from its sigmoidal word
unit. The problem does not exist for isolated word recognition, since any monotonic function (sigmoidal or otherwise) will correlate the highest word activation
with the highest DTW score. However, for continuous speech recognition, which
concatenates words into a sentence, the optimal sum of sigmoids may not correspond to the optimal sigmoid of a sum, leading to an inconsistency between word
and sentence recognition. Linear word units, as shown in (d), resolve this problem; in practice we have found that linear word units perform slightly better than
sigmoidal word units.
The resulting architecture is called a "Multi-State TDNN" because it integrates
the DTW alignment of multiple states into a TDNN to perform word classification.
While an MS-TDNN for small vocabulary recognition can be based on word models
with non-shared states [4], a large vocabulary MS-TDNN must be based on shared
units of speech, such as phonemes. In our system, the TDNN (first three layers)
is shared by all words in the vocabulary, while each word requires only one non-shared weight and bias for each of its phonemes. Thus the number of parameters in
the MS-TDNN remains moderate even for a large vocabulary, and it can make the
most of limited training data. Moreover, new words can be added to the vocabulary
without retraining, by simply defining a new DTW layer for each new word, with
incoming weights and biases initialized to 1.0 and 0.0, respectively.
Given constant weights under the word layer, word level training is really just another way of viewing DTW level training; but the former is conceptually simpler
because there is a single binary target for each word, which makes word level discrimination very straightforward. For a large vocabulary, discriminating against all
incorrect words would be very expensive, so we discriminate against only a small
number of close matches (typically 1).
Word level training yields better word classification than phoneme level training.
In one experiment, for example, we bootstrapped our system with phoneme level
training (as shown in Figure la), and found that word recognition accuracy asymptoted at 71% on a test set. We then continued training from the word level (as
shown in Figure 1d), and found that word accuracy improved to 81% on the test
set. It is worth noting that in an intermediate experiment, even when we held the
DTW layer's incoming weights and biases constant (at 1.0 and 0.0 respectively),
thus adding no new trainable parameters to the system, we found that word level
training still improved the word accuracy from 71% to 75% on the test set, as a
consequence of word-level discrimination.
3 BALANCING THE TRAINING SET
The MS-TDNN must first be bootstrapped with phoneme level training. In our
early experiments we had difficulty bootstrapping the TDNN, not only because our
training set was unbalanced, but also because the vast majority of phonemes were
being trained on a target of 0, so that the negative training was overwhelming and
[Figure 2 panels, garbled in extraction: a phoneme-by-frame target matrix for the utterance "No not yet" (value 1 where a phoneme is the label for a frame, 0 elsewhere), and below it the per-phoneme backpropagation scaling factors, e.g. 1.0 for the single positive frame of "A" and -1/7 for its seven negative frames, 1/2 and -1/6 for "N".]
Figure 2: Balancing the training set: "No not yet".
defeating the positive training. In order to address these problems, we normalized
the amount of error backpropagated from each phoneme unit so that the relative
influence of positive and negative training was balanced out over the entire training
set.
This apparently novel technique is illustrated in Figure 2. Given the utterance "No
not yet", for example, we observe that there are two frames each of "N" and "T",
one frame of several other phonemes, and zero of others. Based on these counts,
we compute a backpropagation scaling factor for each phoneme in each frame, as
shown in the bottom half of the figure.
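The scaling factors follow directly from the frame counts. Below is a small sketch of this computation, reproducing the signed factors displayed in Figure 2 (the function name and matrix layout are illustrative):

import numpy as np

def backprop_scales(targets):
    """Per-phoneme, per-frame error scaling factors.

    targets: (P, T) binary matrix; targets[p, t] = 1 iff phoneme p is
    the label in frame t. For "No not yet": two frames each of N and T,
    one frame of several other phonemes, zero of the rest.
    """
    P, T = targets.shape
    scales = np.zeros((P, T))
    for p in range(P):
        n_pos = targets[p].sum()   # frames in which p is the target
        n_neg = T - n_pos          # frames in which it is not
        for t in range(T):
            if targets[p, t] == 1:
                scales[p, t] = 1.0 / n_pos    # e.g. 1/2 for "N"
            else:
                scales[p, t] = -1.0 / n_neg   # e.g. -1/6 for "N"
    return scales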
We found that this technique was indispensable when bootstrapping with the
squared error criterion, $E = \sum_j (T_j - Y_j)^2$. In subsequent experiments, we found
that it was still somewhat helpful but no longer necessary when training with the
McClelland error function, $E = -\sum_j \log(1 - (T_j - Y_j)^2)$, or with the Cross Entropy
error function, $E = -\sum_j [T_j \log Y_j + (1 - T_j) \log(1 - Y_j)]$. We attribute this difference
to the fact that the sum squared error function is merely a quadratic error function,
whereas the latter two functions tend towards infinite error as the difference between
the target and actual activation approaches its maximum value, compensating more
forcefully for the flat behavior encouraged by all the negative training.
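For reference, here are the three error criteria as stated above, in a short sketch (targets T and activations Y are arrays of matching shape; the cross-entropy form assumes activations strictly inside (0, 1)):

import numpy as np

def sum_squared_error(T, Y):
    return np.sum((T - Y) ** 2)

def mcclelland_error(T, Y):
    # Tends to infinity as |T - Y| approaches 1.
    return -np.sum(np.log(1.0 - (T - Y) ** 2))

def cross_entropy_error(T, Y):
    # Assumes 0 < Y < 1; also unbounded as Y saturates away from T.
    return -np.sum(T * np.log(Y) + (1.0 - T) * np.log(1.0 - Y))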
4 EXPERIMENTAL RESULTS
We have trained and tested our MS-TDNN on two recordings of 200 English sentences from the CMU Conference Registration database, recorded by one male
speaker using a close-speaking microphone. Our speaker-dependent testing results
Table 1: Comparison of speech recognition systems applied to the CMU Conference
Registration Database. HMM-n = HMM with n mixture densities [5]. LPNN =
Linked Predictive Neural Network [6]. HCNN = Hidden Control Neural Network
[7]. LVQ = Learned Vector Quantization [8]. TDNN corresponds to MS-TDNN
without word-level training. (Perplexity 402(a) used only 41 test sentences; 402(b)
used 204 test sentences.)
" System
HMM-I
HMM-5
HMM-10
LPNN
HCNN
LVg
TDNN
MS-TDNN
7
96%
97%
97%
98%
98%
98%
perplexity
111
402(a)
55%
71%
58%
75%
66%
41%
60%
75%
84%
74%
78%
72%
82%
81 '70
402(b)
II
61%
64%
70%
are given in Table 1, along with comparative results from several other systems.
It can be seen that the MS-TDNN has outperformed all other systems that have
been compared on this database. This particular MS-TDNN used 16 mel-scale spectral input units, 20 hidden units, 120 phoneme units (40 phonemes with a 3-state
model), 5,487 DTW units, and 402 word units, with 3 and 5 delays respectively in
the first two layers of weights, giving a total of 24,074 weights; it used symmetric
(-1..1) unit activations and inputs, and linear DTW units and word units. Word
level training was performed using the Classification Figure of Merit (CFM) error
function, $E = (1 + (Y_{\bar{c}} - Y_c))^2$, in which the correct word (with activation $Y_c$) is
explicitly discriminated from the best incorrect word (with activation $Y_{\bar{c}}$); CFM
was somewhat better than MSE for word level training, although the opposite was
true for phoneme level training. Negative word level training was performed only if
the two words were sufficiently confusable, in order to avoid disrupting the network
on behalf of words that had already been well-learned.
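A minimal sketch of the CFM criterion and the confusability gate described above follows; the margin threshold is illustrative, not a value from the paper:

def cfm_error(y_correct, y_best_incorrect):
    """Classification Figure of Merit at the word level:
    penalizes a small (or negative) margin of the correct word
    over its best incorrect competitor."""
    return (1.0 + (y_best_incorrect - y_correct)) ** 2

def should_discriminate(y_correct, y_incorrect, margin=0.3):
    # Apply negative training only if the incorrect word is
    # sufficiently confusable with the correct one.
    return y_incorrect > y_correct - margin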
More recently we have begun experiments on the speaker-independent Resource
Management database, containing nearly 4000 training sentences. To date we have
primarily focused on bootstrapping the phoneme level TDNN on this database,
without doing much word level training; but early experiments suggest we may reasonably expect another 4% improvement from word level training. Our preliminary
results are shown in Table 2, compared against two other systems: an early version
of Sphinx [9], and a simple but large MLP which has been trained as a phoneme
classifier [10]. Each of these systems uses context-independent phoneme models,
with multiple states per phoneme, and includes differenced coefficients in its input
representation. It can be seen that our TDNN outperforms this version of Sphinx
while using a comparable number of parameters, but is outperformed by the MLP
which has an order of magnitude more parameters. (We note that the MLP also uses
phonological models to enhance its performance, and uses online training with ran-
Table 2: Context-independent systems applied to the Resource Management
database.
" System
Early Sphinx
MLP iICSI-SRIl
TDNN
MS-TDNN
I parameters
35,000
300,000
42,000
75,000
perplexity
60
1000
76%
36%
94%
75%
79%
43%
(+4%)
-
II
dom sampling rather than updating the weights after each sentence.) In any case,
we suggest that the MLP might further improve its performance by incorporating
word level training.
5 REMAINING INCONSISTENCIES
While the MS-TDNN was designed for consistency, it is not yet entirely consistent.
For example, the MS-TDNN's training algorithm assumes that the network connectivity is fixed; but in fact the connectivity at the word level varies, depending
on the DTW alignment path during the current iteration. We presume that this
is a negligible factor, however, by the time the training has asymptoted and the
segmentation has stabilized.
A more serious inconsistency arises during discriminative training. In our MS-TDNN, negative training is performed at known word boundaries; this is inconsistent because word boundaries are in fact unknown during testing. It would be
better to discriminate against words found by a free alignment, as suggested by Hild
system.
6 CONCLUSION
We have shown that the performance of a connectionist speech recognition system can be improved by resolving inconsistencies in its design. Specifically, by
introducing word level training into a TDNN phoneme classifier (thus defining an
MS-TDNN), the training and testing criteria become consistent, enhancing the system's word recognition accuracy. We applied our MS-TDNN architecture to the
task of large vocabulary continuous speech recognition, and found that it outperforms all other systems that have been evaluated on the CMU Conference Registration database. In addition, preliminary results suggest that the MS-TDNN
may perform well on the large vocabulary Resource Management database, using
a relatively small number of free parameters. Our future work will focus on this
investigation.
Acknowledgements
The authors gratefully acknowledge the support of DARPA and the National Science
Foundation.
References
[1] P. Haffner, M. Franzini, and A. Waibel. Integrating Time Alignment and
Connectionist Networks for High Performance Continuous Speech Recognition.
In Proc. International Conference on Acoustics, Speech, and Signal Processing
(ICASSP), 1991.
[2] P. Haffner. Connectionist Word-Level Classification in Speech Recognition. In
Proc. ICASSP, 1992.
[3] H. Hild and A. Waibel. Connected Letter Recognition with a Multi-State Time
Delay Neural Network. In Advances in Neural Information Processing Systems
5, Morgan Kaufmann Publishers, 1993.
[4] T. Zeppenfeld and A. Waibel. A Hybrid Neural Network, Dynamic Programming Word Spotter. In Proc. ICASSP, 1992.
[5] O. Schmidbauer. An LVQ Based Reference Model for Speaker-Independent and
-Adaptive Speech Recognition. Technical Report, Carnegie Mellon University,
1991.
[6] J. Tebelskis, A. Waibel, B. Petek, and O. Schmidbauer. Continuous Speech
Recognition using Linked Predictive Neural Networks. In Proc. ICASSP, 1991.
[7] B. Petek and J. Tebelskis. Context-Dependent Hidden Control Neural Network
Architecture for Continuous Speech Recognition. In Proc. ICASSP, 1992.
[8] O. Schmidbauer and J. Tebelskis. An LVQ Based Reference Model for Speaker
Adaptive Speech Recognition. In Proc. ICASSP, 1992.
[9] K. F. Lee. Large Vocabulary Speaker-Independent Continuous Speech Recognition: The SPHINX System. PhD Thesis, Carnegie Mellon University, 1988.
[10] M. Cohen, H. Franco, N. Morgan, D. Rumelhart, and V. Abrash. Context-Dependent Multiple Distribution Phonetic Modeling with MLP's. In Advances
in Neural Information Processing Systems 5, Morgan Kaufmann Publishers,
1993.
5,781 | 6,230 | Attend, Infer, Repeat:
Fast Scene Understanding with Generative Models
S. M. Ali Eslami, Nicolas Heess, Theophane Weber, Yuval Tassa,
David Szepesvari, Koray Kavukcuoglu, Geoffrey E. Hinton
{aeslami,heess,theophane,tassa,dsz,korayk,geoffhinton}@google.com
Google DeepMind, London, UK
Abstract
We present a framework for efficient inference in structured image models that explicitly reason about objects. We achieve this by performing probabilistic inference
using a recurrent neural network that attends to scene elements and processes them
one at a time. Crucially, the model itself learns to choose the appropriate number
of inference steps. We use this scheme to learn to perform inference in partially
specified 2D models (variable-sized variational auto-encoders) and fully specified
3D models (probabilistic renderers). We show that such models learn to identify
multiple objects (counting, locating and classifying the elements of a scene)
without any supervision, e.g., decomposing 3D images with various numbers of
objects in a single forward pass of a neural network at unprecedented speed. We
further show that the networks produce accurate inferences when compared to
supervised counterparts, and that their structure leads to improved generalization.
1 Introduction
The human percept of a visual scene is highly structured. Scenes naturally decompose into objects
that are arranged in space, have visual and physical properties, and are in functional relationships with
each other. Artificial systems that interpret images in this way are desirable, as accurate detection of
objects and inference of their attributes is thought to be fundamental for many problems of interest.
Consider a robot whose task is to clear a table after dinner. To plan its actions it will need to determine
which objects are present, what classes they belong to and where each one is located on the table.
The notion of using structured models for image understanding has a long history (e.g., "vision
as inverse graphics" [4]), however in practice it has been difficult to define models that are: (a)
expressive enough to capture the complexity of natural scenes, and (b) amenable to tractable inference.
Meanwhile, advances in deep learning have shown how neural networks can be used to make
sophisticated predictions from images using little interpretable structure (e.g., [10]). Here we explore
the intersection of structured probabilistic models and deep networks. Prior work on deep generative
methods (e.g., VAEs [9]) have been mostly unstructured, therefore despite producing impressive
samples and likelihood scores their representations have lacked interpretable meaning. On the other
hand, structured generative methods have largely been incompatible with deep learning, and therefore
inference has been hard and slow (e.g., via MCMC).
Our proposed framework achieves scene interpretation via learned, amortized inference, and it
imposes structure on its representation through appropriate partly- or fully-specified generative
models, rather than supervision from labels. It is important to stress that by training generative
models, the aim is not primarily to obtain good reconstructions, but to produce good representations,
in other words to understand scenes. We show experimentally that by incorporating the right kinds of
structures, our models produce representations that are more useful for downstream tasks than those
produced by VAEs or state-of-the-art generative models such as DRAW [3].
The proposed framework crucially allows for reasoning about the complexity of a given scene
(the dimensionality of its latent space). We demonstrate that via an Occam's razor type effect,
this makes it possible to discover the underlying causes of a dataset of images in an unsupervised
manner. For instance, the model structure will enforce that a scene is formed by a variable number
of entities that appear in different locations, but the process of learning will identify what these
scene elements look like and where they appear in any given image. The framework also combines
high-dimensional distributed representations with directly interpretable latent variables (e.g., affine
pose). This combination makes it easier to avoid the pitfalls of models that are too unconstrained
(leading to data-hungry learning) or too rigid (leading to failure via mis-specification).
The main contributions of the paper are as follows. First, in Sec. 2 we formalize a scheme for
efficient variational inference in latent spaces of variable dimensionality. The key idea is to treat
inference as an iterative process, implemented as a recurrent neural network that attends to one object
at a time, and learns to use an appropriate number of inference steps for each image. We call the
proposed framework Attend-Infer-Repeat (AIR). End-to-end learning is enabled by recent advances
in amortized variational inference, e.g., combining gradient based optimization for continuous latent
variables with black-box optimization for discrete ones. Second, in Sec. 3 we show that AIR allows
for learning of generative models that decompose multi-object scenes into their underlying causes,
e.g., the constituent objects, in an unsupervised manner. We demonstrate these capabilities on MNIST
digits (Sec. 3.1), overlapping sprites and Omniglot glyphs (appendices H and G). We show that
model structure can provide an important inductive bias that is not easily learned otherwise, leading
to improved generalization. Finally, in Sec. 3.2 we demonstrate how our inference framework can
be used to perform inference for a 3D rendering engine with unprecedented speed, recovering the
counts, identities and 3D poses of complex objects in scenes with significant occlusion in a single
forward pass of a neural network, providing a scalable approach to "vision as inverse graphics".
2 Approach
In this paper we take a Bayesian perspective of scene interpretation, namely that of treating this task
as inference in a generative model. Thus given an image $x$ and a model $p_\theta^x(x|z)\,p_\theta^z(z)$ parameterized
by $\theta$ we wish to recover the underlying scene description $z$ by computing the posterior $p(z|x) =
p_\theta^x(x|z)\,p_\theta^z(z)/p(x)$. In this view, the prior $p_\theta^z(z)$ captures our assumptions about the underlying
scene, and the likelihood $p_\theta^x(x|z)$ is our model of how a scene description is rendered to form an
image. Both can take various forms depending on the problem at hand and we will describe particular
instances in Sec. 3. Together, they define the language that we use to describe a scene.
Many real-world scenes naturally decompose into objects. We therefore make the modeling assumption that the scene description is structured into groups of variables zi , where each group describes
the attributes of one of the objects in the scene, e.g., its type, appearance, and pose. Since the number
of objects will vary from scene to scene, we assume models of the following form:
$$p_\theta(x) = \sum_{n=1}^{N} p_N(n) \int p_\theta^z(z|n)\, p_\theta^x(x|z)\, dz. \qquad (1)$$
This can be interpreted as follows. We first sample the number of objects $n$ from a suitable prior
(for instance a Binomial distribution) with maximum value $N$. The latent, variable length, scene
descriptor $z = (z_1, z_2, \ldots, z_n)$ is then sampled from a scene model $z \sim p_\theta^z(\cdot|n)$. Finally, we render
the image according to $x \sim p_\theta^x(\cdot|z)$. Since the indexing of objects is arbitrary, $p_\theta^z(\cdot)$ is exchangeable
and $p_\theta^x(x|\cdot)$ is permutation invariant, and therefore the posterior over $z$ is exchangeable.
The prior and likelihood terms can take different forms. We consider two scenarios: For 2D scenes
(Sec. 3.1), each object is characterized in terms of a learned distributed continuous representation for
its shape, and a continuous 3-dimensional variable for its pose (position and scale). For 3D scenes
(Sec. 3.2), objects are defined in terms of a categorical variable that characterizes their identity, e.g.,
sphere, cube or cylinder, as well as their positions and rotations. We refer to the two kinds of variables
for each object $i$ in both scenarios as $z_\text{what}^i$ and $z_\text{where}^i$ respectively, bearing in mind that their meaning
(e.g., position and scale in pixel space vs. position and orientation in 3D space) and their data type
(continuous vs. discrete) will vary. We further assume that the $z_i$ are independent under the prior, i.e.,
$p_\theta^z(z|n) = \prod_{i=1}^{n} p_\theta^z(z_i)$, but non-independent priors, such as a distribution over hierarchical scene
graphs (e.g., [28]), can also be accommodated. Furthermore, while the number of objects is bounded
as per Eq. 1, it is relatively straightforward to relax this assumption.
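As a concrete illustration, the generative process of Eq. 1 can be sketched as follows for the 2D case; `decode` and `render` stand in for the learned decoder and the spatial-transformer compositing step, and the exact prior support is illustrative:

import numpy as np

def sample_scene(render, decode, N=3, rho=0.5, d_what=50,
                 image_shape=(50, 50), rng=np.random.default_rng()):
    """Minimal sketch of the generative model: sample a count n,
    then appearance and pose codes per object, then composite."""
    n = min(rng.geometric(rho) - 1, N)        # number of objects (0..N)
    canvas = np.zeros(image_shape)
    for _ in range(n):
        z_what = rng.standard_normal(d_what)  # appearance code ~ N(0, I)
        z_where = rng.standard_normal(3)      # scale and 2D position
        canvas += render(decode(z_what), z_where)
    return canvas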
Figure 1: Left: A single random variable z produces the observation x (the image). The relationship
between z and x is specified by a model. Inference is the task of computing likely values of z given x.
Using an auto-encoding architecture, the model (red arrow) and its inference network (black arrow)
can be trained end-to-end via gradient descent. Right: For most images of interest, multiple latent
variables (e.g., multiple objects) give rise to the image. We propose an iterative, variable-length
inference network (black arrows) that attends to one object at a time, and train it jointly with its model.
The result is fast, feed-forward, interpretable scene understanding trained without supervision.
2.1 Inference
Despite their natural appeal, inference for most models in the form of Eq. 1 is intractable due
to the dimensionality of the integral. We therefore employ an amortized variational approximation to the true posterior by learning a distribution $q_\phi(z, n|x)$ parameterized by $\phi$ that minimizes
$\mathrm{KL}\left[q_\phi(z, n|x)\,\|\,p_\theta(z, n|x)\right]$. While such approximations have recently been used successfully in
a variety of works [21, 9, 18] the specific form of our model poses two additional difficulties.
Trans-dimensionality: As a challenging departure from classical latent space models, the size of the
latent space $\mathbb{R}^n$ (i.e., the number of objects) is a random variable itself, which necessitates evaluating
$p_N(n|x) = \int p_\theta(z, n|x)\, dz$, for all $n = 1 \ldots N$. Symmetry: There are strong symmetries that arise, for
instance, from alternative assignments of objects appearing in an image $x$ to latent variables $z_i$.
We address these challenges by formulating inference as an iterative process implemented as a
recurrent neural network, which infers the attributes of one object at a time. The network is run
for N steps and in each step explains one object in the scene, conditioned on the image and on its
knowledge of previously explained objects (see Fig. 1).
To simplify sequential reasoning about the number of objects, we parameterize n as a variable length
latent vector zpres using a unary code: for a given value n, zpres is the vector formed of n ones followed
by one zero. Note that the two representations are equivalent. The posterior takes the following form:
$$q_\phi(z, z_\text{pres} \mid x) = q_\phi(z_\text{pres}^{n+1} = 0 \mid z_{1:n}, x) \prod_{i=1}^{n} q_\phi(z_i, z_\text{pres}^i = 1 \mid x, z_{1:i-1}). \qquad (2)$$
$q_\phi$ is implemented as a neural network that, in each step, outputs the parameters of the sampling
distributions over the latent variables, e.g., the mean and standard deviation of a Gaussian distribution
for continuous variables. $z_\text{pres}$ can be understood as an interruption variable: at each time step, if
the network outputs $z_\text{pres} = 1$, it describes at least one more object and proceeds, but if it outputs
$z_\text{pres} = 0$, no more objects are described, and inference terminates for that particular datapoint.
Note that conditioning of $z_i \mid x, z_{1:i-1}$ is critical to capture dependencies between the latent variables
$z_i$ in the posterior, e.g., to avoid explaining the same object twice. The specifics of the networks that
achieve this depend on the particularities of the models and we will describe them in detail in Sec. 3.
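The resulting inference loop can be sketched as below; `rnn` and `heads` are placeholders for the learned recurrent core and the output layers producing each step's distribution parameters (unit standard deviation is used for brevity):

import numpy as np

def air_inference(x, rnn, heads, N, rng=np.random.default_rng()):
    """One pass of AIR inference (Eq. 2): attend to one object per
    step and stop as soon as z_pres = 0."""
    h = np.zeros(256)                  # hidden state (size illustrative)
    z = []
    for _ in range(N):
        h = rnn(x, h)                  # condition on image and history
        p_pres, mu_what, mu_where = heads(h)
        if rng.random() >= p_pres:     # z_pres = 0: no more objects
            break
        z_what = mu_what + rng.standard_normal(mu_what.shape)
        z_where = mu_where + rng.standard_normal(mu_where.shape)
        z.append((z_what, z_where))    # one object explained per step
    return z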
2.2 Learning
We can jointly optimize the parameters $\theta$ of the model and $\phi$ of the inference network by maximizing
the lower bound on the marginal likelihood of an image under the model: $\log p_\theta(x) \ge \mathcal{L}(\theta, \phi) =
\mathbb{E}_{q_\phi}\!\left[\log \frac{p_\theta(x, z, n)}{q_\phi(z, n \mid x)}\right]$ with respect to $\theta$ and $\phi$. $\mathcal{L}$ is called the negative free energy. We provide an outline
of how to construct an estimator of the gradient of this quantity below; for more details see [23].
Computing a Monte Carlo estimate of $\partial\mathcal{L}/\partial\theta$ is relatively straightforward: given a sample from the
approximate posterior $(z, z_\text{pres}) \sim q_\phi(\cdot \mid x)$ (i.e., when the latent variables have been "filled in") we
can readily compute $\partial \log p_\theta(x, z, n)/\partial\theta$ provided $p$ is differentiable in $\theta$.
Computing a Monte Carlo estimate of $\partial\mathcal{L}/\partial\phi$ is more involved. As discussed above, the RNN that
implements $q_\phi$ produces the parameters of the sampling distributions for the scene variables $z$
and presence variables $z_\text{pres}$. For a time step $i$, denote with $\omega^i$ all the parameters of the sampling
distributions of variables in $(z_\text{pres}^i, z_i)$. We parameterize the dependence of this distribution on
$z_{1:i-1}$ and $x$ using a recurrent function $R_\phi(\cdot)$ implemented as a neural network such that $(\omega^i, h^i) =
R_\phi(x, h^{i-1})$ with hidden variables $h$. The full gradient is obtained via the chain rule: $\partial\mathcal{L}/\partial\phi =
\sum_i \partial\mathcal{L}/\partial\omega^i \cdot \partial\omega^i/\partial\phi$. Below we explain how to compute $\partial\mathcal{L}/\partial\omega^i$. We first rewrite our cost function
as follows: $\mathcal{L}(\theta, \phi) = \mathbb{E}_{q_\phi}[\ell(\theta, \phi, z, n)]$ where $\ell(\theta, \phi, z, n)$ is defined as $\log \frac{p_\theta(x, z, n)}{q_\phi(z, n \mid x)}$. Let $z^i$ be an
arbitrary element of the vector $(z_i, z_\text{pres}^i)$ of type $\{\text{what, where, pres}\}$. How to proceed depends on
whether $z^i$ is continuous or discrete.
Continuous: Suppose $z^i$ is a continuous variable. We use the path-wise estimator (also known as
the "re-parameterization trick", e.g., [9, 23]), which allows us to "back-propagate" through the random
variable $z^i$. For many continuous variables (in fact, without loss of generality), $z^i$ can be sampled as
$h(\xi, \omega^i)$, where $h$ is a deterministic transformation function, and $\xi$ a random variable from a fixed
noise distribution $p(\xi)$, giving the gradient estimate: $\partial\mathcal{L}/\partial\omega^i \approx \partial\ell(\theta, \phi, z, n)/\partial z^i \cdot \partial h/\partial\omega^i$.
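A minimal sketch of the path-wise estimator for a Gaussian latent follows; because the sample is a deterministic, differentiable function of the distribution parameters, gradients flow through it:

import numpy as np

def reparam_sample(mu, log_sigma, rng=np.random.default_rng()):
    """z = h(xi, omega) with omega = (mu, log_sigma), xi ~ N(0, 1)."""
    xi = rng.standard_normal(mu.shape)     # fixed noise distribution
    z = mu + np.exp(log_sigma) * xi        # deterministic transform h
    # dz/dmu = 1;  dz/dlog_sigma = exp(log_sigma) * xi
    return z, xi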
Discrete: For discrete scene variables (e.g., $z_\text{pres}^i$) we cannot compute the gradient $\partial\mathcal{L}/\partial\omega^i$ by
back-propagation. Instead we use the likelihood ratio estimator [18, 23]. Given a posterior
sample $(z, n) \sim q_\phi(\cdot \mid x)$ we can obtain a Monte Carlo estimate of the gradient: $\partial\mathcal{L}/\partial\omega^i \approx
\partial \log q(z^i \mid \omega^i)/\partial\omega^i \,\ell(\theta, \phi, z, n)$. In the raw form presented here this gradient estimate is likely
to have high variance. We reduce its variance using appropriately structured neural baselines [18]
that are functions of the image and the latent variables produced so far.
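For the Bernoulli presence variable, the likelihood-ratio gradient with a baseline takes the following form; this is a standard variance-reduced version, with the paper's structured neural baselines abstracted into a single scalar:

def pres_gradient(p_pres, z_pres, ell, baseline):
    """Score-function gradient for z_pres ~ Bernoulli(p_pres):
    dL/dp ~= d log q(z_pres | p) / dp * (ell - baseline)."""
    # d/dp log Bernoulli(z | p) = z / p - (1 - z) / (1 - p)
    score = z_pres / p_pres - (1.0 - z_pres) / (1.0 - p_pres)
    return score * (ell - baseline)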
3 Models and Experiments
We first apply AIR to a dataset of multiple MNIST digits, and show that it can reliably learn to detect
and generate the constituent digits from scratch (Sec. 3.1). We show that this provides advantages over
state-of-the-art generative models such as DRAW [3] in terms of computational effort, generalization
to unseen datasets, and the usefulness of the inferred representations for downstream tasks. We also
apply AIR to a setting where a 3D renderer is specified in advance. We show that AIR learns to use
the renderer to infer the counts, identities and poses of multiple objects in synthetic and real table-top
scenes with unprecedented speed (Sec. 3.2 and appendix J).
Details of the AIR model and networks used in the 2D experiments are shown in Fig. 2. The
generative model (Fig. 2, left) draws $n \sim \text{Geom}(\rho)$ digits $\{y_\text{att}^i\}$, scales and shifts them according to
$z_\text{where}^i \sim \mathcal{N}(0, \Sigma)$ using spatial transformers, and sums the results $\{y^i\}$ to form the image. Each digit
is obtained by first sampling a latent code $z_\text{what}^i$ from the prior $z_\text{what}^i \sim \mathcal{N}(0, 1)$ and propagating it
through a decoder network. The learnable parameters of the generative model are the parameters of
this decoder network. The AIR inference network (Fig. 2, middle) produces three sets of variables
for each entity at every time-step: a 1-dimensional Bernoulli variable indicating the entity's presence,
a C-dimensional distributed vector describing its class or appearance ($z_\text{what}^i$), and a 3-dimensional
vector specifying the affine parameters of its position and scale ($z_\text{where}^i$). Fig. 2 (right) shows the
interaction between the inference and generation networks at every time-step. The inferred pose is
used to attend to a part of the image (using a spatial transformer) to produce $x_\text{att}^i$, which is processed
to produce the inferred code $z_\text{what}^i$ and the reconstruction of the contents of the attention window
$y_\text{att}^i$. The same pose information is used by the generative model to transform $y_\text{att}^i$ to obtain $y^i$. This
contribution is only added to the canvas $y$ if $z_\text{pres}^i$ was inferred to be true.
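Putting these pieces together, one inference/generation step of Fig. 2 (right) can be sketched as follows; `st` crops an attention window given the pose (a spatial transformer), `st_inv` places a patch back into the image frame, and all callables are placeholders for learned networks:

def air_step(x, canvas, h, rnn, heads, encode, decode, st, st_inv):
    """One attend-infer-generate step (sketch)."""
    h = rnn(x, h)
    z_pres, z_where = heads(h)
    x_att = st(x, z_where)        # attend to one part of the image
    z_what = encode(x_att)        # infer the appearance code
    y_att = decode(z_what)        # reconstruct the window contents
    y_i = st_inv(y_att, z_where)  # transform back to the image frame
    if z_pres:                    # add to the canvas only if present
        canvas = canvas + y_i
    return canvas, h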
For the dataset of MNIST digits, we also investigate the behavior of a variant, difference-AIR
(DAIR), which employs a slightly different recurrent architecture for the inference network (see
Fig. 8 in appendix). As opposed to AIR which computes $z_i$ via $h^i$ and $x$, DAIR reconstructs at
every time step $i$ a partial reconstruction $\hat{x}^i$ of the data $x$, which is set as the mean of the distribution
$p_\theta^x(x \mid z_1, z_2, \ldots, z_{i-1})$. We create an error canvas $\Delta x^i = \hat{x}^i - x$, and the DAIR inference equation
$R_\phi$ is then specified as $(\omega^i, h^i) = R_\phi(\Delta x^i, h^{i-1})$.
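In code, the only change DAIR makes to each step's input is the error canvas; a one-line sketch:

def dair_input(x, canvas):
    """DAIR feeds R_phi the unexplained residual rather than x itself."""
    x_hat = canvas        # mean of p(x | z_1, ..., z_{i-1})
    return x_hat - x      # the error canvas, delta x^i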
Figure 2: AIR in practice: Left: The assumed generative model. Middle: AIR inference for this
model. The contents of the grey box are input to the decoder. Right: Interaction between the inference
and generation networks at every time-step. In our experiments the relationship between $x_\text{att}^i$ and $y_\text{att}^i$
is modeled by a VAE, however any generative model of patches could be used (even, e.g., DRAW).
Figure 3: Multi-MNIST learning: Left above: Images from the dataset. Left below: Reconstructions
at different stages of training along with a visualization of the model's attention windows. The 1st,
2nd and 3rd time-steps are displayed using red, green and blue borders respectively. A video of this
sequence is provided in the supplementary material. Above right: Count accuracy over time. The
model detects the counts of digits accurately, despite having never been provided supervision. Chance
accuracy is 25%. Below right: The learned scanning policy for 3 different runs of training (only
differing in the random seed). We visualize empirical heatmaps of the attention windows' positions
(red, and green for the first and second time-steps respectively). As expected, the policy is random.
This suggests that the policy is spatial, as opposed to identity- or size-based.
3.1 Multi-MNIST
We begin with a 50x50 dataset of multi-MNIST digits. Each image contains zero, one or two
non-overlapping random MNIST digits with equal probability. The desired goal is to train a network
that produces sensible explanations for each of the images. We train AIR with N = 3 on 60,000 such
images from scratch, i.e., without a curriculum or any form of supervision by maximizing L with
respect to the parameters of the inference network and the generative model. Upon completion of
training we inspect the model's inferences (see Fig. 3, left). We draw the reader's attention to the
following observations. First, the model identifies the number of digits correctly, due to the opposing
pressures of (a) wanting to explain the scene, and (b) the cost that arises from instantiating an object
under the prior. This is indicated by the number of attention windows in each image; we also plot
the accuracy of count inference over the course of training (Fig. 3, above right). Second, it locates
the digits accurately. Third, the recurrent network learns a suitable scanning policy to ensure that
different time-steps account for different digits (Fig. 3, below right). Note that we did not have to
specify any such policy in advance, nor did we have to build in a constraint to prevent two time-steps
from explaining the same part of the image. Finally, that the network learns to not use the second
time-step when the image contains only a single digit, and to never use the third time-step (images
contain a maximum of two digits). This allows for the inference network to stop upon encountering
i
the first zpres
equaling 0, leading to potential savings in computation during inference.
A video showing real-time inference using AIR has been included in the supplementary material.
We also perform experiments on Omniglot ([13], appendix G) to demonstrate AIR's ability to parse
glyphs into elements resembling "strokes", as well as a dataset of sprites where the scene's elements
appear under significant overlap (appendix H). See appendices for details and results.
Figure 4: Strong generalization: Left: Reconstructions of images with 3 digits made by DAIR
trained on 0, 1 or 2 digits, as well as a comparison with DRAW. Right: Variational lower bound, and
generalizing / interpolating count accuracy. DAIR outperforms both DRAW and AIR at this task.
3.1.1 Strong Generalization
Since the model learns the concept of a digit independently of the positions or numbers of times it
appears in each image, one would hope that it would be able to generalize, e.g., by demonstrating an
understanding of scenes that have structural differences to training scenes. We probe this behavior
with the following scenarios: (a) Extrapolation: training on images each containing 0, 1 or 2 digits
and then testing on images containing 3 digits, and (b) Interpolation: training on images containing
0, 1 or 3 digits and testing on images containing 2 digits. The result of this experiment is shown in
Fig. 4. An AIR model trained on up to 2 digits is effectively unable to infer the correct count when
presented with an image of 3 digits. We believe this to be caused by the LSTM which learns during
training never to expect more than 2 digits. AIR's generalization performance is improved somewhat
when considering the interpolation task. DAIR by contrast generalizes well in both tasks (and finds
interpolation to be slightly easier than extrapolation). A closely related baseline is the Deep Recurrent
Attentive Writer (DRAW, [3]), which like AIR, generates data sequentially. However, DRAW has a
fixed and large number of steps (40 in our experiments). As a consequence generative steps do not
correspond to easily interpretable entities; complex scenes are drawn faster and simpler ones slower.
We show DRAW's reconstructions in Fig. 4. Interestingly, DRAW learns to ignore precisely one digit
in the image. See appendix for further details of these experiments.
3.1.2 Representational Power
A second motivation for the use of structured models is that their inferences about a scene provide useful representations for downstream tasks. We examine this ability by first training an AIR model on 0, 1 or 2 digits and then produce inferences for a separate collection of images that contains precisely 2 digits. We split this data into training and test and consider two tasks: (a) predicting the sum of the two digits (as was done in [1]), and (b) determining if the digits appear in an ascending order. We compare with a CNN trained from the raw pixels, as well as interpretations produced by a convolutional autoencoder (CAE) and DRAW (Fig. 5). We optimize each model's hyper-parameters (e.g. depth and size) for maximal performance. AIR achieves high accuracy even when data is scarce, indicating the power of its disentangled, structured representation. See appendix for further details.
Figure 5: Representational power: AIR achieves high accuracy using only a fraction of the labeled data. Left: summing two digits. Right: detecting if they appear in increasing order. Despite producing comparable reconstructions, CAE and DRAW inferences are less interpretable than AIR's and therefore lead to poorer downstream performance.
3.2 3D Scenes
The experiments above demonstrate learning of inference and generative networks in models where
we impose structure in the form of a variable-sized representation and spatial attention mechanisms.
We now consider an additional way of imparting knowledge to the system: we specify the generative
model via a 3D renderer, i.e., we completely specify how any scene representation is transformed to
produce the pixels in an image. Therefore the task is to learn to infer the counts, identities and poses
of several objects, given different images containing these objects and an implementation of a 3D
renderer from which we can draw new samples. This formulation of computer vision is often called
"vision as inverse graphics" (see e.g., [4, 15, 7]).
Figure 6: 3D objects: Left: The task is to infer the identity and pose of a single 3D object. (a) Images
from the dataset. (b) Unsupervised AIR reconstructions. (c) Supervised reconstructions. Note poor
performance on cubes due to their symmetry. (d) Reconstructions after direct gradient descent. This
approach is less stable and much more susceptible to local minima. Right: AIR can learn to recover
the counts, identities and poses of multiple objects in a 3D table-top scene. (e,g) Generated and real
images. (f,h) AIR produces fast and accurate inferences which we visualize using the renderer.
The primary challenge in this view of computer vision is that of inference. While it is relatively easy
to specify high-quality models in the form of probabilistic renderers, posterior inference is either
extremely expensive or prone to getting stuck in local minima (e.g., via optimization or MCMC). In
addition, probabilistic renderers typically are not capable of providing
gradients with respect to their inputs, and 3D scene representations often involve discrete variables,
e.g., mesh identities. We address these challenges by using finite-differencing to obtain a gradient
through the renderer, using the score function estimator to get gradients with respect to discrete
variables, and using AIR inference to handle correlated posteriors and variable-length representations.
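A central-difference gradient through a black-box renderer can be sketched as follows for the continuous scene variables (positions, rotations); the step size is illustrative:

import numpy as np

def renderer_grad(render, z, x, eps=1e-2):
    """Finite-difference gradient of a pixel loss through a renderer
    that provides no analytic gradients."""
    def loss(z):
        return np.sum((render(z) - x) ** 2)
    g = np.zeros_like(z)
    for k in range(len(z)):
        dz = np.zeros_like(z)
        dz[k] = eps
        g[k] = (loss(z + dz) - loss(z - dz)) / (2.0 * eps)  # central diff
    return g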
We demonstrate the capabilities of this approach by first considering scenes consisting of only one
of three objects: a red cube, a blue sphere, and a textured cylinder (see Fig. 6a). Since the scenes
only consist of single objects, the task is only to infer the identity (cube, sphere, cylinder) and pose
(position and rotation) of the object present in the image. We train a single-step (N = 1) AIR
inference network for this task. The network is only provided with unlabeled images and is trained to
maximize the likelihood of those images under the model specified by the renderer. The quality of
the inferred scene representations produced is visually inspected in Fig. 6b. The network accurately
and reliably infers the identity and pose of the object present in the scene. In contrast, an identical
network trained to predict the ground-truth identity and pose values of the training data (in a similar
style to [11]) has much more difficulty in accurately determining the cube's orientation (Fig. 6c).
The supervised loss forces the network to predict the exact angle of rotation. However this is not
identifiable from the image due to rotational symmetry, which leads to conditional probabilities that
are multi-modal and difficult to represent using standard network architectures. We also compare
with direct optimization of the likelihood from scratch for every test image (Fig. 6d), and observe that
this method is slower, less stable and more susceptible to local minima. So not only does amortization
reduce the cost of inference, but it also overcomes the pitfalls of independent gradient optimization.
We finally consider a more complex setup, where we infer the counts, identities and positions of a
variable number of crockery items, as well as the camera position, in a table-top scene. This would
be of critical importance to a robot, say, which is tasked with clearing the table. The goal is to
learn to perform this task with as little supervision as possible, and indeed we observe that with
AIR it is possible to do so with no supervision other than a specification of the renderer. We show
reconstructions of AIR's inferences on generated data, as well as real images of a table with varying
numbers of plates, in Fig. 6 and Fig. 7. AIR's inferences of counts, identities and positions are
accurate for the most part. For transfer to real scenes we perform random color and size perturbations
to rendered objects during training, however we note that robust transfer remains a challenging
problem in general. We provide a quantitative comparison of AIR's inference robustness and accuracy
on generated scenes with that of a fully supervised network in Fig. 7. We consider two scenarios: one
where each object type only appears exactly once, and one where objects can repeat in the scene. A
naive supervised setup struggles with object repetitions or when an arbitrary ordering of the objects
is imposed by the labels, however training is more straightforward when there are no repetitions. AIR
achieves competitive reconstruction and counts despite the added difficulty of object repetitions.
Figure 7: 3D scenes details: Left: Ground-truth object and camera positions with inferred positions
overlaid in red (note that the inferred cup is closely aligned with ground-truth, thus not clearly visible).
We demonstrate fast inference of all relevant scene elements using the AIR framework. Middle: AIR
produces significantly better reconstructions and count accuracies than a supervised method on data
that contains repetitions, and is even competitive on simpler data. Right: Heatmap of object locations
at each time-step (top). The learned policy appears to be more dependent on identity (bottom).
4 Related Work
Deep neural networks have had great success in learning to predict various quantities from images,
e.g., object classes [10], camera positions [8] and actions [20]. These methods work best when
large labeled datasets are available for training. At the other end of the spectrum, e.g., in "vision
as inverse graphics", only a generative model is specified in advance and prediction is treated as an
inference problem, which is then solved using MCMC or message passing at test-time. These models
range from highly specified [17, 16], to partially specified [28, 24, 25], to largely unspecified [22].
Inference is very challenging and almost always the bottle-neck in model design.
Several works exploit data-driven predictions to empower the "vision as inverse graphics" paradigm
[5, 7]. For instance, in PICTURE [11], the authors use a deep network to distill the results of slow
MCMC, speeding up predictions at test-time. Variational auto-encoders [21, 9] and their discrete
counterparts [18] made the important contribution of showing how the gradient computations for
learning of amortized inference and generative models could be interleaved, allowing both to be
learned simultaneously in an end-to-end fashion (see also [23]). Works like that of [12] aim to learn
disentangled representations in an auto-encoding framework using special network structures and / or
careful training schemes. It is also worth noting that attention mechanisms in neural networks have
been studied in discriminative and generative settings, e.g., [19, 6, 3].
AIR draws upon, extends and links these ideas. By its nature AIR is also related to the following
problems: counting [14, 27], pondering [2], and gradient estimation through renderers [15]. It is the
combination of these elements that unlocks the full capabilities of the proposed approach.
5 Discussion
In this paper our aim has been to learn unsupervised models that are good at scene understanding, in
addition to scene reconstruction. We presented several principled models that learn to count, locate,
classify and reconstruct the elements of a scene, and do so in a fraction of a second at test-time. The
main ingredients are (a) building in meaning using appropriate structure, (b) amortized inference that
is attentive, iterative and variable-length, and (c) end-to-end learning.
We demonstrated that model structure can provide an important inductive bias that gives rise to
interpretable representations that are not easily learned otherwise. We also showed that even for
sophisticated models or renderers, fast inference is possible. We do not claim to have found an ideal
model for all images; many challenges remain, e.g., the difficulty of working with the reconstruction
loss and that of designing models rich enough to capture all natural factors of variability.
Learning in AIR is most successful when the variance of the gradients is low and the likelihood is well
suited to the data. It will be of interest to examine the scaling of variance with the number of objects
and alternative likelihoods. It is straightforward to extend the framework to semi- or fully-supervised
settings. Furthermore, the framework admits a plug-and-play approach where existing state-of-the-art
detectors, classifiers and renderers are used as sub-components of an AIR inference network. We
plan to investigate these lines of research in future work.
References
[1] Jimmy Ba, Volodymyr Mnih, and Koray Kavukcuoglu. Multiple Object Recognition with
Visual Attention. In ICLR, 2015.
[2] Alex Graves. Adaptive computation time for recurrent neural networks. arXiv preprint arXiv:1603.08983, 2016.
[3] Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Rezende, and Daan Wierstra. DRAW: A
Recurrent Neural Network For Image Generation. In ICML, 2015.
[4] Ulf Grenander. Pattern Synthesis: Lectures in Pattern Theory. 1976.
[5] Geoffrey E. Hinton, Peter Dayan, Brendan J. Frey, and Randford M. Neal. The "wake-sleep"
algorithm for unsupervised neural networks. Science, 268(5214), 1995.
[6] Max Jaderberg, Karen Simonyan, Andrew Zisserman, and Koray Kavukcuoglu. Spatial Transformer Networks. In NIPS 28, 2015.
[7] Varun Jampani, Sebastian Nowozin, Matthew Loper, and Peter V. Gehler. The Informed
Sampler: A Discriminative Approach to Bayesian Inference in Generative Computer Vision
Models. CVIU, 2015.
[8] Alex Kendall, Matthew Grimes, and Roberto Cipolla. PoseNet: A Convolutional Network for
Real-Time 6-DOF Camera Relocalization. In ICCV, 2015.
[9] Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. arXiv preprint
arXiv:1312.6114, 2013.
[10] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet Classification with Deep
Convolutional Neural Networks. In NIPS 25, 2012.
[11] Tejas D. Kulkarni, Pushmeet Kohli, Joshua B. Tenenbaum, and Vikash K. Mansinghka. Picture:
A probabilistic programming language for scene perception. In CVPR, 2015.
[12] Tejas D. Kulkarni, William F. Whitney, Pushmeet Kohli, and Josh Tenenbaum. Deep Convolutional Inverse Graphics Network. In NIPS 28. 2015.
[13] Brenden M. Lake, Ruslan Salakhutdinov, and Joshua B. Tenenbaum. Human-level concept
learning through probabilistic program induction. Science, 350(6266), 2015.
[14] Victor Lempitsky and Andrew Zisserman. Learning To Count Objects in Images. In NIPS 23.
2010.
[15] Matthew M. Loper and Michael J. Black. OpenDR: An Approximate Differentiable Renderer.
In ECCV, volume 8695, 2014.
[16] Vikash Mansinghka, Tejas Kulkarni, Yura Perov, and Josh Tenenbaum. Approximate Bayesian
Image Interpretation using Generative Probabilistic Graphics Programs. In NIPS 26. 2013.
[17] Brian Milch, Bhaskara Marthi, Stuart Russell, David Sontag, Daniel L. Ong, and Andrey Kolobov. BLOG: Probabilistic Models with Unknown Objects. In International Joint Conference on Artificial Intelligence, pages 1352–1359, 2005.
[18] Andriy Mnih and Karol Gregor. Neural Variational Inference and Learning. In ICML, 2014.
[19] Volodymyr Mnih, Nicolas Heess, Alex Graves, and Koray Kavukcuoglu. Recurrent Models of
Visual Attention. In NIPS 27, 2014.
[20] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G.
Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan
Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement
learning. Nature, 518, 2015.
[21] Danilo J. Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic Backpropagation and
Approximate Inference in Deep Generative Models. In ICML, 2014.
[22] Ruslan Salakhutdinov and Geoffrey Hinton. Deep Boltzmann Machines. In AISTATS, 2009.
[23] John Schulman, Nicolas Heess, Theophane Weber, and Pieter Abbeel. Gradient Estimation
Using Stochastic Computation Graphs. In NIPS 28. 2015.
[24] Yichuan Tang, Ruslan Salakhutdinov, and Geoffrey Hinton. Tensor Analyzers. In ICML, 2013.
[25] Yichuan Tang, Nitish Srivastava, and Ruslan Salakhutdinov. Learning Generative Models With
Visual Attention. In NIPS 27, 2014.
[26] Emanuel Todorov, Tom Erez, and Yuval Tassa. MuJoCo: A physics engine for model-based control. In IROS, 2012.
[27] Jianming Zhang, Shugao Ma, Mehrnoosh Sameki, Stan Sclaroff, Margrit Betke, Zhe Lin, Xiaohui Shen, Brian Price, and Radomír Měch. Salient Object Subitizing. In CVPR, 2015.
[28] Song-Chun Zhu and David Mumford. A Stochastic Grammar of Images. Foundations and
Trends in Computer Graphics and Vision, 2(4), 2006.
A Probabilistic Framework for Deep Learning
Ankit B. Patel
Baylor College of Medicine, Rice University
[email protected],[email protected]
Tan Nguyen
Rice University
[email protected]
Richard G. Baraniuk
Rice University
[email protected]
Abstract
We develop a probabilistic framework for deep learning based on the Deep Rendering Mixture Model (DRMM), a new generative probabilistic model that explicitly captures variations in data due to latent task nuisance variables. We demonstrate
that max-sum inference in the DRMM yields an algorithm that exactly reproduces
the operations in deep convolutional neural networks (DCNs), providing a first-principles derivation. Our framework provides new insights into the successes and
shortcomings of DCNs as well as a principled route to their improvement. DRMM
training via the Expectation-Maximization (EM) algorithm is a powerful alternative
to DCN back-propagation, and initial training results are promising. Classification
based on the DRMM and other variants outperforms DCNs in supervised digit
classification, training 2–3× faster while achieving similar accuracy. Moreover, the
DRMM is applicable to semi-supervised and unsupervised learning tasks, achieving results that are state-of-the-art in several categories on the MNIST benchmark
and comparable to the state of the art on the CIFAR10 benchmark.
1 Introduction
Humans are adept at a wide array of complicated sensory inference tasks, from recognizing objects
in an image to understanding phonemes in a speech signal, despite significant variations such as
the position, orientation, and scale of objects and the pronunciation, pitch, and volume of speech.
Indeed, the main challenge in many sensory perception tasks in vision, speech, and natural language
processing is a high amount of such nuisance variation. Nuisance variations complicate perception
by turning otherwise simple statistical inference problems with a small number of variables (e.g.,
class label) into much higher-dimensional problems. The key challenge in developing an inference
algorithm is then how to factor out all of the nuisance variation in the input. Over the past few decades,
a vast literature that approaches this problem from myriad different perspectives has developed, but
the most difficult inference problems have remained out of reach.
Recently, a new breed of machine learning algorithms have emerged for high-nuisance inference
tasks, achieving super-human performance in many cases. A prime example of such an architecture
is the deep convolutional neural network (DCN), which has seen great success in tasks like visual
object recognition and localization, speech recognition and part-of-speech recognition.
The success of deep learning systems is impressive, but a fundamental question remains: Why do they
work? Intuitions abound to explain their success. Some explanations focus on properties of feature
invariance and selectivity developed over multiple layers, while others credit raw computational
power and the amount of available training data. However, beyond these intuitions, a coherent
theoretical framework for understanding, analyzing, and synthesizing deep learning architectures has
remained elusive.
In this paper, we develop a new theoretical framework that provides insights into both the successes
and shortcomings of deep learning systems, as well as a principled route to their design and improvement. Our framework is based on a generative probabilistic model that explicitly captures variation
due to latent nuisance variables. The Rendering Mixture Model (RMM) explicitly models nuisance
variation through a rendering function that combines task target variables (e.g., object class in an
object recognition task) with a collection of task nuisance variables (e.g., pose). The Deep Rendering
Mixture Model (DRMM) extends the RMM in a hierarchical fashion by rendering via a product of
affine nuisance transformations across multiple levels of abstraction. The graphical structures of the
RMM and DRMM enable efficient inference via message passing (e.g., using the max-sum/product
algorithm) and training via the expectation-maximization (EM) algorithm. A key element of our
framework is the relaxation of the RMM/DRMM generative model to a discriminative one in order to
optimize the bias-variance tradeoff. Below, we demonstrate that the computations involved in joint
MAP inference in the relaxed DRMM coincide exactly with those in a DCN.
The intimate connection between the DRMM and DCNs provides a range of new insights into how
and why they work and do not work. While our theory and methods apply to a wide range of different
inference tasks (including, for example, classification, estimation, regression, etc.) that feature a
number of task-irrelevant nuisance variables (including, for example, object and speech recognition),
for concreteness of exposition, we will focus below on the classification problem underlying visual
object recognition. The proofs of several results appear in the Appendix.
2 Related Work
Theories of Deep Learning. Our theoretical work shares similar goals with several others such
as the i-Theory [1] (one of the early inspirations for this work), Nuisance Management [24], the
Scattering Transform [6], and the simple sparse network proposed by Arora et al. [2].
Hierarchical Generative Models. The DRMM is closely related to several hierarchical models,
including the Deep Mixture of Factor Analyzers [27] and the Deep Gaussian Mixture Model [29].
Like the above models, the DRMM attempts to employ parameter sharing, capture the notion of
nuisance transformations explicitly, learn selectivity/invariance, and promote sparsity. However,
the key features that distinguish the DRMM approach from others are: (i) The DRMM explicitly
models nuisance variation across multiple levels of abstraction via a product of affine transformations.
This factorized linear structure serves dual purposes: it enables (ii) tractable inference (via the max-sum/product algorithm), and (iii) it serves as a regularizer to prevent overfitting by an exponential
reduction in the number of parameters. Critically, (iv) inference is not performed for a single variable
of interest but instead for the full global configuration of nuisance variables. This is justified in lownoise settings. And most importantly, (v) we can derive the structure of DCNs precisely, endowing
DCN operations such as the convolution, rectified linear unit, and spatial max-pooling with principled
probabilistic interpretations. Independently from our work, Soatto et al. [24] also focus strongly on
nuisance management as the key challenge in defining good scene representations. However, their
work considers max-pooling and ReLU as approximations to a marginalized likelihood, whereas our
work interprets those operations differently, in terms of max-sum inference in a specific probabilistic
generative model. The work on the number of linear regions in DCNs [14] is complementary to our
own, in that it sheds light on the complexity of functions that a DCN can compute. Both approaches
could be combined to answer questions such as: How many templates are required for accurate
discrimination? How many samples are needed for learning? We plan to pursue these questions in
future work.
Semi-Supervised Neural Networks. Recent work in neural networks designed for semi-supervised
learning (few labeled data, lots of unlabeled data) has seen the resurgence of generative-like approaches, such as Ladder Networks [17], Stacked What-Where Autoencoders (SWWAE) [31] and
many others. These network architectures augment the usual task loss with one or more regularization
term, typically including an image reconstruction error, and train jointly. A key difference with our
DRMM-based approach is that these networks do not arise from a proper probabilistic density and as
such they must resort to learning the bottom-up recognition and top-down reconstruction weights
separately, and they cannot keep track of uncertainty.
3 The Deep Rendering Mixture Model: Capturing Nuisance Variation
Although we focus on the DRMM in this paper, we define and explore several other interesting variants, including the Deep Rendering Factor Model (DRFM) and the Evolutionary DRMM (E-DRMM), both of which are discussed in more detail in [16] and the Appendix. The E-DRMM is particularly important, since its max-sum inference algorithm yields a decision tree of the type employed in a random decision forest classifier [5].
Figure 1: Graphical model depiction of (A) the Shallow Rendering Models and (B) the DRMM. All dependence on pixel location $x$ has been suppressed for clarity. (C) The Sparse Sum-over-Paths formulation of the DRMM. A rendering path contributes only if it is active (green arrows).
3.1 The (Shallow) Rendering Mixture Model
The RMM is a generative probabilistic model for images that explicitly models the relationship between images $I$ of the same object $c$ subject to nuisance $g \in \mathcal{G}$, where $\mathcal{G}$ is the set of all nuisances (see Fig. 1A for the graphical model depiction).
$$c \sim \mathrm{Cat}(\{\pi_c\}_{c \in \mathcal{C}}), \quad g \sim \mathrm{Cat}(\{\pi_g\}_{g \in \mathcal{G}}), \quad a \sim \mathrm{Bern}(\{\pi_a\}_{a \in \mathcal{A}}), \quad I = a\,\mu_{cg} + \text{noise}. \tag{1}$$
Here, $\mu_{cg}$ is a template that is a function of the class $c$ and the nuisance $g$. The switching variable $a \in \mathcal{A} = \{\text{ON}, \text{OFF}\}$ determines whether or not to render the template at a particular patch; a sparsity prior on $a$ thus encourages each patch to have a few causes. The noise distribution is from the exponential family, but without loss of generality we illustrate below using Gaussian noise $\mathcal{N}(0, \sigma^2 \mathbf{1})$. We assume that the noise is i.i.d. as a function of pixel location $x$ and that the class and nuisance variables are independently distributed according to categorical distributions. (Independence is merely a convenience for the development; in practice, $g$ can depend on $c$.) Finally, since the world is spatially varying and an image can contain a number of different objects, it is natural to break the image up into a number of patches, each centered on a single pixel $x$. The RMM described in (1) then applies at the patch level, where $c$, $g$, and $a$ depend on pixel/patch location $x$. We will omit the dependence on $x$ when it is clear from context.
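To make the generative process in (1) concrete, the following is a minimal NumPy sketch of sampling a patch from the shallow RMM. All sizes, priors, templates, and the noise level are hypothetical placeholders rather than values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 3 classes, 4 nuisance configurations, 8x8 patches.
C, G, D = 3, 4, 8 * 8
pi_c = np.full(C, 1.0 / C)           # class prior pi_c
pi_g = np.full(G, 1.0 / G)           # nuisance prior pi_g
pi_a = 0.9                           # P(a = ON): most patches get rendered
mu = rng.normal(size=(C, G, D))      # templates mu_{cg}, one per (class, nuisance)
sigma = 0.1                          # noise standard deviation

def sample_rmm_patch():
    """Draw (c, g, a) from their priors and render I = a * mu_{cg} + noise."""
    c = rng.choice(C, p=pi_c)
    g = rng.choice(G, p=pi_g)
    a = rng.random() < pi_a          # Bernoulli switching variable
    I = a * mu[c, g] + sigma * rng.normal(size=D)
    return c, g, a, I

c, g, a, I = sample_rmm_patch()
print(f"class={c} nuisance={g} a={bool(a)} ||I||={np.linalg.norm(I):.3f}")
```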
Inference in the Shallow RMM Yields One Layer of a DCN. We now connect the RMM with the computations in one layer of a deep convolutional network (DCN). To perform object recognition with the RMM, we must marginalize out the nuisance variables $g$ and $a$. Maximizing the log-posterior over $g \in \mathcal{G}$ and $a \in \mathcal{A}$ and then choosing the most likely class yields the max-sum classifier
$$\hat{c}(I) = \underset{c \in \mathcal{C}}{\operatorname{argmax}} \max_{g \in \mathcal{G}} \max_{a \in \mathcal{A}} \; \ln p(I|c, g, a) + \ln p(c, g, a) \tag{2}$$
that computes the most likely global configuration of target and nuisance variables for the image. Assuming that Gaussian noise is added to the template, the image is normalized so that $\|I\|_2 = 1$, and $c, g$ are uniformly distributed, (2) becomes
$$\hat{c}(I) \equiv \underset{c \in \mathcal{C}}{\operatorname{argmax}} \max_{g \in \mathcal{G}} \max_{a \in \mathcal{A}} \; a(\langle w_{cg}|I\rangle + b_{cg}) + b_a = \underset{c \in \mathcal{C}}{\operatorname{argmax}} \max_{g \in \mathcal{G}} \; \mathrm{ReLU}(\langle w_{cg}|I\rangle + b_{cg}) + b_0 \tag{3}$$
where $\mathrm{ReLU}(u) \equiv (u)_+ = \max\{u, 0\}$ is the soft-thresholding operation performed by the rectified linear units in modern DCNs. Here we have reparameterized the RMM model from the moment parameters $\theta \equiv \{\sigma^2, \mu_{cg}, \pi_a\}$ to the natural parameters $\eta(\theta) \equiv \{w_{cg} \equiv \frac{1}{\sigma^2}\mu_{cg},\; b_{cg} \equiv -\frac{1}{2\sigma^2}\|\mu_{cg}\|_2^2,\; b_a \equiv \ln p(a) = \ln \pi_a,\; b_0 \equiv \ln \frac{p(a=1)}{p(a=0)}\}$. The relationships $\eta(\theta)$ are referred to as the generative parameter constraints.
We now demonstrate that the sequence of operations in the max-sum classifier in (3) coincides exactly with the operations involved in one layer of a DCN: image normalization, linear template matching, thresholding, and max pooling. First, the image is normalized (by assumption). Second, the image is filtered with a set of noise-scaled rendered templates $w_{cg}$. If we assume translational invariance in the RMM, then the rendered templates $w_{cg}$ yield a convolutional layer in a DCN [10] (see Appendix Lemma A.2). Third, the resulting activations (log-probabilities of the hypotheses) are passed through a pooling layer; if $g$ is a translational nuisance, then taking the maximum over $g$ corresponds to max pooling in a DCN. Fourth, since the switching variables are latent (unobserved), we max-marginalize over them during classification. This leads to the ReLU operation (see Appendix Proposition A.3).
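The correspondence in (3) can be checked numerically. The sketch below implements the max-sum classifier on a flattened patch with hypothetical weights; with translational nuisances, the inner products would become a convolution and the max over $g$ would become spatial max pooling.

```python
import numpy as np

rng = np.random.default_rng(1)

C, G, D = 3, 4, 64
sigma2 = 1.0
mu = rng.normal(size=(C, G, D))                 # rendered templates mu_{cg}
w = mu / sigma2                                 # w_{cg} = mu_{cg} / sigma^2
b = -0.5 * np.sum(mu ** 2, axis=-1) / sigma2    # b_{cg} = -||mu_{cg}||^2 / (2 sigma^2)
b0 = 0.0                                        # log-odds of the switching variable

def max_sum_classify(I):
    """Eq. (3): argmax_c max_g ReLU(<w_{cg}|I> + b_{cg}) + b0."""
    I = I / np.linalg.norm(I)                   # normalize so that ||I||_2 = 1
    scores = w @ I + b                          # template matching, shape (C, G)
    scores = np.maximum(scores, 0.0)            # ReLU = max-marginalization over a
    return int(np.argmax(scores.max(axis=1) + b0))  # max over g = "pooling"

print("predicted class:", max_sum_classify(rng.normal(size=D)))
```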
3.2 The Deep Rendering Mixture Model: Capturing Levels of Abstraction
Marginalizing over the nuisance $g \in \mathcal{G}$ in the RMM is intractable for modern datasets, since $\mathcal{G}$ will contain all configurations of the high-dimensional nuisance variables $g$. In response, we extend the RMM into a hierarchical Deep Rendering Mixture Model (DRMM) by factorizing $g$ into a number of different nuisance variables $g^{(1)}, g^{(2)}, \ldots, g^{(L)}$ at different levels of abstraction. The DRMM image generation process starts at the highest level of abstraction ($\ell = L$), with the random choice of the object class $c^{(L)}$ and overall nuisance $g^{(L)}$. It is then followed by random choices of the lower-level details $g^{(\ell)}$ (we absorb the switching variable $a$ into $g$ for brevity), progressively rendering more concrete information level-by-level ($\ell \to \ell - 1$), until the process finally culminates in a fully rendered $D$-dimensional image $I$ ($\ell = 0$). Generation in the DRMM takes the form:
$$c^{(L)} \sim \mathrm{Cat}(\{\pi_{c^{(L)}}\}), \quad g^{(\ell)} \sim \mathrm{Cat}(\{\pi_{g^{(\ell)}}\}) \quad \forall \ell \in [L] \tag{4}$$
$$\mu_{c^{(L)}g} \equiv \Lambda_g \mu_{c^{(L)}} \equiv \Lambda^{(1)}_{g^{(1)}} \Lambda^{(2)}_{g^{(2)}} \cdots \Lambda^{(L-1)}_{g^{(L-1)}} \Lambda^{(L)}_{g^{(L)}} \mu_{c^{(L)}} \tag{5}$$
$$I \sim \mathcal{N}(\mu_{c^{(L)}g}, \sigma^2 \mathbf{1}_D), \tag{6}$$
where the latent variables, parameters, and helper variables are defined in full detail in Appendix B. The DRMM is a deep Gaussian Mixture Model (GMM) with special constraints on the latent variables. Here, $c^{(L)} \in \mathcal{C}^L$ and $g^{(\ell)} \in \mathcal{G}^\ell$, where $\mathcal{C}^L$ is the set of target-relevant nuisance variables, and $\mathcal{G}^\ell$ is the set of all target-irrelevant nuisance variables at level $\ell$. The rendering path is defined as the sequence $(c^{(L)}, g^{(L)}, \ldots, g^{(\ell)}, \ldots, g^{(1)})$ from the root (overall class) down to the individual pixels at $\ell = 0$. $\mu_{c^{(L)}g}$ is the template used to render the image, and $\Lambda_g \equiv \prod_\ell \Lambda_{g^{(\ell)}}$ represents the sequence of local nuisance transformations that partially render finer-scale details as we move from abstract to concrete. Note that each $\Lambda^{(\ell)}_{g^{(\ell)}}$ is an affine transformation with a bias term $\alpha^{(\ell)}_{g^{(\ell)}}$ that we have suppressed for clarity. Fig. 1B illustrates the corresponding graphical model. As before, we have suppressed the dependence of $g^{(\ell)}$ on the pixel location $x^{(\ell)}$ at level $\ell$ of the hierarchy.
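The coarse-to-fine generation in (4)-(6) is a chain of randomly selected affine maps applied to a class template. A minimal sketch with hypothetical layer sizes (bias terms omitted, matching the zero-bias assumption used later):

```python
import numpy as np

rng = np.random.default_rng(2)

C = 3                          # number of top-level classes
dims = [8, 16, 32, 64]         # D_L, ..., D_0: widths from abstract to pixels
G = 5                          # |G^l|: nuisance choices per level (hypothetical)
mu_top = rng.normal(size=(C, dims[0]))   # class templates mu_{c^(L)}
# One (next-dim x current-dim) matrix per nuisance value per level.
Lambdas = [rng.normal(size=(G, dims[i + 1], dims[i])) / np.sqrt(dims[i])
           for i in range(len(dims) - 1)]

def sample_drmm_image(c, sigma=0.05):
    """z^(L) = mu_c; z^(l) = Lambda^(l+1)_{g^(l+1)} z^(l+1); I = z^(0) + noise."""
    z = mu_top[c]
    for Lam in Lambdas:                  # from level L down toward pixels
        g = rng.integers(G)              # sample this level's nuisance
        z = Lam[g] @ z                   # partially render finer-scale detail
    return z + sigma * rng.normal(size=z.shape)

print("rendered image dim:", sample_drmm_image(c=0).shape)
```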
Sum-Over-Paths Formulation of the DRMM. We can rewrite the DRMM generation process by expanding out the matrix multiplications into scalar products. This yields an interesting new perspective on the DRMM, as each pixel intensity $I_x = \sum_p \lambda^{(L)}_p a^{(L)}_p \cdots \lambda^{(1)}_p a^{(1)}_p$ is the sum, over all active paths to that pixel, of the product of weights along that path. A rendering path $p$ is active iff every switch on the path is active, i.e., $\prod_\ell a^{(\ell)}_p = 1$. While exponentially many possible rendering paths exist, only a very small fraction, controlled by the sparsity of $a$, are active. Fig. 1C depicts the sum-over-paths formulation graphically.
Recursive and Nonnegative Forms. We can rewrite the DRMM into a recursive form as $z^{(\ell)} = \Lambda^{(\ell+1)}_{g^{(\ell+1)}} z^{(\ell+1)}$, where $z^{(L)} \equiv \mu_{c^{(L)}}$ and $z^{(0)} \equiv I$. We refer to the helper latent variables $z^{(\ell)}$ as intermediate rendered templates. We also define the Nonnegative DRMM (NN-DRMM) as a DRMM with an extra nonnegativity constraint on the intermediate rendered templates, $z^{(\ell)} \geq 0 \;\; \forall \ell \in [L]$. The latter is enforced in training via the use of a ReLU operation in the top-down reconstruction phase of inference. Throughout the rest of the paper, we will focus on the NN-DRMM, leaving the unconstrained DRMM for future work. For brevity, we will drop the NN prefix.
Factor Model. We also define and explore a variant of the DRMM where the top-level latent variable is Gaussian: $z^{(L+1)} \sim \mathcal{N}(0, \mathbf{1}_d) \in \mathbb{R}^d$, and the recursive generation process is otherwise identical to the DRMM: $z^{(\ell)} = \Lambda^{(\ell+1)}_{g^{(\ell+1)}} z^{(\ell+1)}$, where $g^{(L+1)} \equiv c^{(L)}$. We call this the Deep Rendering Factor Model (DRFM). The DRFM is closely related to the Spike-and-Slab Sparse Coding model [22]. Below we explore some training results, but we leave most of the exploration for future work. (See Fig. 3 in Appendix C for the architecture of the RFM, the shallow version of the DRFM.)
Number of Free Parameters. Compared to the shallow RMM, which has $D\,|\mathcal{C}^L| \prod_\ell |\mathcal{G}^\ell|$ parameters, the DRMM has only $\sum_\ell |\mathcal{G}^{\ell+1}| D_\ell D_{\ell+1}$ parameters, an exponential reduction in the number of free parameters. (Here $\mathcal{G}^{L+1} \equiv \mathcal{C}^L$ and $D_\ell$ is the number of units in the $\ell$-th layer, with $D_0 \equiv D$.) This enables efficient inference, learning, and better generalization. Note that we have assumed dense (fully connected) $\Lambda_g$'s here; if we impose more structure (e.g., translation invariance), the number of parameters will be further reduced.
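The exponential saving is easy to verify numerically. Under the hypothetical sizes below (not taken from the paper), the shallow RMM must store one template per full nuisance configuration, while the DRMM stores only per-level factors:

```python
import math

D = 32 * 32                        # pixel dimension D_0
C_L = 10                           # number of top-level classes
G = [16, 16, 16, 16]               # |G^l| for l = 1..L (hypothetical)
dims = [D, 512, 256, 128, 64]      # D_0, ..., D_L (hypothetical layer widths)

# Shallow RMM: one D-dimensional template per (class, full nuisance config).
shallow = D * C_L * math.prod(G)

# DRMM: per-level factors |G^l| * D_{l-1} * D_l, plus the top-level templates.
deep = sum(G[l] * dims[l] * dims[l + 1] for l in range(len(G))) + C_L * dims[-1]

print(f"shallow RMM parameters: {shallow:,}")
print(f"DRMM parameters:        {deep:,}  (~{shallow / deep:.0f}x fewer)")
```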
Bottom-Up Inference. As in the shallow RMM, given an input image $I$ the DRMM classifier infers the most likely global configuration $\{c^{(L)}, g^{(\ell)}\},\; \ell = 0, 1, \ldots, L$ by executing the max-sum/product message passing algorithm in two stages: (i) bottom-up (from fine-to-coarse) to infer the overall class label $\hat{c}^{(L)}$ and (ii) top-down (from coarse-to-fine) to infer the latent variables $\hat{g}^{(\ell)}$ at all intermediate levels $\ell$. First, we will focus on the fine-to-coarse pass, since it leads directly to DCNs.
Using (3), the fine-to-coarse NN-DRMM inference algorithm for inferring the most likely category $\hat{c}^{(L)}$ is given by
$$\underset{c^{(L)} \in \mathcal{C}}{\operatorname{argmax}} \max_{g \in \mathcal{G}} \; \mu_{c^{(L)}g}^T I = \underset{c^{(L)} \in \mathcal{C}}{\operatorname{argmax}} \max_{g \in \mathcal{G}} \; \mu_{c^{(L)}}^T \prod_{\ell=L}^{1} \Lambda_{g^{(\ell)}}^T I = \underset{c^{(L)} \in \mathcal{C}}{\operatorname{argmax}} \; \mu_{c^{(L)}}^T \max_{g^{(L)} \in \mathcal{G}^L} \Lambda_{g^{(L)}}^T \cdots \underbrace{\max_{g^{(1)} \in \mathcal{G}^1} \Lambda_{g^{(1)}}^T I}_{\equiv I^{(1)}} = \cdots = \underset{c^{(L)} \in \mathcal{C}}{\operatorname{argmax}} \; \mu_{c^{(L)}}^T I^{(L)}. \tag{7}$$
Here, we have assumed the bias terms $\alpha_{g^{(\ell)}} = 0$. In the second line, we used the max-product algorithm (distributivity of max over products, i.e., for $a > 0$, $\max\{ab, ac\} = a \max\{b, c\}$). See Appendix B for full details. This enables us to rewrite (7) recursively:
$$I^{(\ell+1)} \equiv \max_{g^{(\ell+1)} \in \mathcal{G}^{\ell+1}} \big(\underbrace{\Lambda_{g^{(\ell+1)}}}_{\equiv W^{(\ell+1)}}\big)^T I^{(\ell)} = \mathrm{MaxPool}(\mathrm{ReLU}(\mathrm{Conv}(I^{(\ell)}))), \tag{8}$$
where $I^{(\ell)}$ are the output feature maps of layer $\ell$, $I^{(0)} \equiv I$, and $W^{(\ell)}$ are the filters/weights for layer $\ell$. Comparing to (3), we see that the $\ell$-th iteration of (7) and (8) corresponds to feedforward propagation in the $\ell$-th layer of a DCN. Thus a DCN's operation has a probabilistic interpretation as fine-to-coarse inference of the most probable configuration in the DRMM.
Top-Down Inference. A unique contribution of our generative model-based approach is that we have a principled derivation of a top-down inference algorithm for the NN-DRMM (Appendix B). The resulting algorithm amounts to a simple top-down reconstruction term $\hat{I}_n = \Lambda_{\hat{g}_n} \mu_{\hat{c}^{(L)}_n}$.
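Putting (7), (8), and the top-down step together, the sketch below runs a fine-to-coarse pass, records the winning nuisances, and re-renders along the inferred path. The dense maps, the greedy per-level choice of $g$, and the sum-based score are simplifications of the exact max-product pass; a real DCN would use convolutions and local pooling.

```python
import numpy as np

rng = np.random.default_rng(3)

dims = [64, 32, 16, 8]     # D_0 (pixels) up to D_L (top); hypothetical widths
G, C = 5, 3
# Lambda^(l+1)_g maps level-(l+1) features down to level l: shape (D_l, D_{l+1}).
Lambdas = [rng.normal(size=(G, dims[i], dims[i + 1])) / np.sqrt(dims[i])
           for i in range(len(dims) - 1)]
mu_top = rng.normal(size=(C, dims[-1]))      # class templates mu_{c^(L)}

def infer_and_reconstruct(I):
    # Bottom-up (eq. 8): I^(l+1) = max_g ReLU(Lambda_g^T I^(l)); greedy over g.
    z, g_hat = I, []
    for Lam in Lambdas:
        acts = np.maximum(np.einsum('gdk,d->gk', Lam, z), 0.0)   # (G, D_{l+1})
        g = int(acts.sum(axis=1).argmax())    # greedy per-level nuisance choice
        g_hat.append(g)
        z = acts[g]
    c_hat = int((mu_top @ z).argmax())        # argmax_c mu_c^T I^(L), as in (7)
    # Top-down: re-render along the inferred path, I_hat = Lambda_{g_hat} mu_{c_hat}.
    recon = mu_top[c_hat]
    for Lam, g in zip(reversed(Lambdas), reversed(g_hat)):
        recon = np.maximum(Lam[g] @ recon, 0.0)   # ReLU keeps z^(l) >= 0 (NN-DRMM)
    return c_hat, g_hat, recon

c_hat, g_hat, recon = infer_and_reconstruct(rng.normal(size=dims[0]))
print("class:", c_hat, "| nuisances:", g_hat, "| reconstruction dim:", recon.shape)
```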
Discriminative Relaxations: From Generative to Discriminative Classifiers. We have constructed a correspondence between the DRMM and DCNs, but the mapping is not yet complete.
In particular, recall the generative constraints on the weights and biases. DCNs do not have such
constraints: their weights and biases are free parameters. As a result, when faced with training data that violates the DRMM's underlying assumptions, the DCN will have more freedom to compensate.
In order to complete our mapping from the DRMM to DCNs, we relax these parameter constraints,
allowing the weights and biases to be free and independent parameters. We refer to this process as a
discriminative relaxation of a generative classifier ([15, 4], see the Appendix D for details).
3.3 Learning the Deep Rendering Model via the Expectation-Maximization (EM) Algorithm
We describe how to learn the DRMM parameters from training data via the hard EM algorithm in Algorithm 1. The DRMM E-step consists of bottom-up and top-down (reconstruction) E-steps at each layer $\ell$ in the model. The $\gamma_{ncg} \equiv p(c, g|I_n; \theta)$ are the responsibilities, where for brevity we have absorbed $a$ into $g$. The DRMM M-step consists of M-steps for each layer $\ell$ in the model. The per-layer M-step in turn consists of a responsibility-weighted regression, where $\mathrm{GLS}(y_n \sim x_n)$ denotes the solution to a generalized least squares regression problem that predicts targets $y_n$ from predictors $x_n$ and is closely related to the SVD. The Iverson bracket is defined as $[\![b]\!] \equiv 1$ if expression $b$ is true and 0 otherwise. There are several interesting and useful features of the EM algorithm. First, we note that it is a derivative-free alternative to the back propagation algorithm for training that is both intuitive and potentially much faster (provided a good implementation for the GLS problem). Second, it is easily parallelized over layers, since the M-step updates each layer separately (model parallelism).
Algorithm 1 Hard EM and EG Algorithms for the DRMM
E-step: $\hat{c}_n, \hat{g}_n = \operatorname{argmax}_{c,\,g}\, \gamma_{ncg}$
M-step: $\hat{\Lambda}_{g^{(\ell)}} = \mathrm{GLS}\big(I_n^{(\ell-1)} \sim \hat{z}_n^{(\ell)} \mid g^{(\ell)} = \hat{g}_n^{(\ell)}\big) \quad \forall g^{(\ell)}$
G-step: $\Delta\hat{\Lambda}_{g^{(\ell)}} \propto \nabla_{\Lambda_{g^{(\ell)}}}\, \ell_{\mathrm{DRMM}}(\theta) \quad \forall g^{(\ell)}$
Moreover, it can be extended to a batch version so that at each iteration the model is simultaneously updated using separate subsets of the data (data parallelism). This will enable training
to be distributed easily across multiple machines. In this vein, our EM algorithm shares several
features with the ADMM-based Bregman iteration algorithm in [28]. However, the motivation there is
from an optimization perspective and so the resulting training algorithm is not derived from a proper
probabilistic density. Third, it is far more interpretable via its connections to (deep) sparse coding
and to the hard EM algorithm for GMMs. The sum-over-paths formulation makes it particularly clear
that the mixture components are paths (from root to pixels) in the DRMM.
G-step. For the training results in this paper, we use the Generalized EM algorithm wherein we
replace the M-step with a gradient descent based G-step (see Algorithm 1). This is useful for
comparison with backpropagation-based training and for ease of implementation.
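For intuition, here is a minimal hard-EM loop for the shallow RMM, essentially hard EM for a structured Gaussian mixture: the E-step assigns each patch its best nuisance, and the M-step refits each template as the mean of its assigned patches, a degenerate case of the GLS regression in Algorithm 1. Labels are treated as observed and all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)

C, G, D, N = 2, 3, 16, 600
true_mu = rng.normal(size=(C, G, D))                 # ground-truth templates
labels = rng.integers(C, size=N)                     # observed class labels
X = true_mu[labels, rng.integers(G, size=N)] + 0.05 * rng.normal(size=(N, D))

mu = rng.normal(size=(C, G, D))                      # random initialization
for _ in range(20):
    # Hard E-step: assign each patch its best nuisance (class is observed).
    dists = ((X[:, None, :] - mu[labels]) ** 2).sum(-1)   # (N, G)
    g_hat = dists.argmin(axis=1)
    # M-step: each template becomes the mean of the patches assigned to it.
    for c in range(C):
        for g in range(G):
            mask = (labels == c) & (g_hat == g)
            if mask.any():
                mu[c, g] = X[mask].mean(axis=0)

print("mean squared distance to assigned template:", dists.min(axis=1).mean())
```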
Flexibility and Extensibility. Since we can choose different priors/types for the nuisances g, the
larger DRMM family could be useful for modeling a wider range of inputs, including scenes, speech
and text. The EM algorithm can then be used to train the whole system end-to-end on different
sources/modalities of labeled and unlabeled data. Moreover, the capability to sample from the model
allows us to probe what is captured by the DRMM, providing us with principled ways to improve the
model. And finally, in order to properly account for noise/uncertainty, it is possible in principle to
extend this algorithm into a soft EM algorithm. We leave these interesting extensions for future work.
3.4 New Insights into Deep Convnets
DCNs are Message Passing Networks. The convolution, Max-Pooling and ReLU operations in a DCN correspond to max-sum/product inference in a DRMM. Note that by "max-sum-product" we mean a novel combination of max-sum and max-product as described in more detail in the proofs in the Appendix. Thus, we see that architectures and layer types commonly used in today's DCNs can
be derived from precise probabilistic assumptions that entirely determine their structure. The DRMM
therefore unifies two perspectives ? neural network and probabilistic inference (see Table 2 in the
Appendix for details).
Shortcomings of DCNs. DCNs perform poorly in categorizing transparent objects [20]. This
might be explained by the fact that transparent objects generate pixels that have multiple sources,
conflicting with the DRMM sparsity prior on a, which encourages few sources. DCNs also fail to
classify slender and man-made objects [20]. This is because of the locality imposed by the locally-connected/convolutional layers, or equivalently, the small size of the template $\mu_{c^{(L)}g}$ in the DRMM.
As a result, DCNs fail to model long-range correlations.
Class Appearance Models and Activity Maximization. The DRMM enables us to understand how trained DCNs distill and store knowledge from past experiences in their parameters. Specifically, the DRMM generates rendered templates $\mu_{c^{(L)}g}$ via a mixture of products of affine transformations, thus implying that class appearance models in DCNs are stored in a similar factorized-mixture form over multiple levels of abstraction. As a result, it is the product of all the filters/weights over all layers that yields meaningful images of objects (Eq. 6). We can also shed new light on another approach to understanding DCN memories that proceeds by searching for input images that maximize the activity of a particular class unit (say, the class of cats) [23], a technique we call activity maximization. Results from activity maximization on a high-performance DCN trained on 15 million images are shown in Fig. 1 of [23]. The resulting images reveal much about how DCNs store memories. Using the DRMM, the solution $I^*_{c^{(L)}}$ of the activity maximization for class $c^{(L)}$ can be derived as the sum of individual activity-maximizing patches $I^*_{P_i}$, each of which is a function of the learned DRMM parameters (see Appendix E). In particular, $I^*_{c^{(L)}} \equiv \sum_{P_i \in \mathcal{P}} I^*_{P_i}(c^{(L)}, g^*_{P_i}) \propto \sum_{P_i \in \mathcal{P}} \mu(c^{(L)}, g^*_{P_i})$.
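Under this view, activity maximization has a closed form at the patch level: for a norm-constrained linear class score, the optimal patch is the strongest-pose template itself. A toy sketch with hypothetical templates:

```python
import numpy as np

rng = np.random.default_rng(5)

C, G, D = 3, 4, 64
mu = rng.normal(size=(C, G, D))    # hypothetical learned templates mu_{cg}

def activity_maximizing_patch(c):
    """Maximize the linear class score <mu_{cg}, I> over unit-norm patches I
    and poses g: the optimum is the strongest-pose template, normalized."""
    norms = np.linalg.norm(mu[c], axis=1)      # ||mu_{cg}|| for each pose g
    g_star = int(norms.argmax())
    return g_star, mu[c, g_star] / norms[g_star]

# A full activity-maximizing image is a superposition of such patches, each
# rendered at its own optimal pose g*_{P_i}, as in the expression above.
g_star, patch = activity_maximizing_patch(c=1)
print("optimal pose:", g_star, "| patch norm:", float(np.linalg.norm(patch)))
```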
Figure 2: Information about latent nuisance variables at each layer (Left), training results from EG
for RFM (Middle) and DRFM (Right) on MNIST, as compared to DCNs of the same configuration.
This implies that $I^*_{c^{(L)}}$ contains multiple appearances of the same object but in various poses. Each activity-maximizing patch has its own pose $g^*_{P_i}$, consistent with Fig. 1 of [23] and our own extensive experiments with AlexNet, VGGNet, and GoogLeNet (data not shown). Such images provide strong confirmational evidence that the underlying model is a mixture over nuisance parameters, as predicted by the DRMM.
Unsupervised Learning of Latent Task Nuisances. A key goal of representation learning is to
disentangle the factors of variation that contribute to an image's appearance. Given our formulation of
the DRMM, it is clear that DCNs are discriminative classifiers that capture these factors of variation
with latent nuisance variables g. As such, the theory presented here makes a clear prediction that for
a DCN, supervised learning of task targets will lead to unsupervised learning of latent task nuisance
variables. From the perspective of manifold learning, this means that the architecture of DCNs is
designed to learn and disentangle the intrinsic dimensions of the data manifolds.
In order to test this prediction, we trained a DCN to classify synthetically rendered images of
naturalistic objects, such as cars and cats, with variation in factors such as location, pose, and lighting.
After training, we probed the layers of the trained DCN to quantify how much linearly decodable
information exists about the task target c(L) and latent nuisance variables g. Fig. 2 (Left) shows that
the trained DCN possesses significant information about latent factors of variation and, furthermore,
the more nuisance variables, the more layers are required to disentangle the factors. This is strong
evidence that depth is necessary and that the amount of depth required increases with the complexity
of the class models and the nuisance variations.
4 Experimental Results
We evaluate the DRMM and DRFM's performance on the MNIST dataset, a standard digit classification benchmark with a training set of 60,000 28×28 labeled images and a test set of 10,000 labeled images. We also evaluate the DRMM's performance on CIFAR10, a dataset of natural objects which includes a training set of 50,000 32×32 labeled images and a test set of 10,000 labeled images. In all
experiments, we use a full E-step that has a bottom-up phase and a principled top-down reconstruction
phase. In order to approximate the class posterior in the DRMM, we include a Kullback-Leibler
divergence term between the inferred posterior p(c|I) and the true prior p(c) as a regularizer [9].
We also replace the M-step in the EM algorithm of Algorithm 1 by a G-step where we update
the model parameters via gradient descent. This variant of EM is known as the Generalized EM
algorithm [3], and here we refer to it as EG. All DRMM experiments were done with the NN-DRMM.
Configurations of our models and the corresponding DCNs are provided in Appendix I.
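The KL regularizer mentioned above is straightforward to compute. A hedged NumPy stand-in (batch values are illustrative only):

```python
import numpy as np

def kl_to_prior(posterior, prior, eps=1e-12):
    """Mean KL(p(c|I) || p(c)) over a batch: posterior is (N, C), prior is (C,)."""
    p = np.clip(posterior, eps, 1.0)
    q = np.clip(prior, eps, 1.0)
    return float(np.mean(np.sum(p * (np.log(p) - np.log(q)), axis=1)))

batch_posteriors = np.array([[0.7, 0.2, 0.1],
                             [0.1, 0.8, 0.1]])
uniform_prior = np.full(3, 1.0 / 3.0)
print("KL regularizer:", round(kl_to_prior(batch_posteriors, uniform_prior), 4))
```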
Supervised Training. Supervised training results are shown in Table 3 in the Appendix. Shallow
RFM: The 1-layer RFM (RFM sup) yields similar performance to a Convnet of the same configuration
(1.21% vs. 1.30% test error). Also, as predicted by the theory of generative vs discriminative
classifiers, EG training converges 2–3× faster than a DCN (18 vs. 40 epochs to reach 1.5% test error, Fig. 2, middle). Deep RFM: Training results from an initial implementation of the 2-layer DRFM EG algorithm converge 2–3× faster than a DCN of the same configuration, while achieving a
similar asymptotic test error (Fig. 2, Right). Also, for completeness, we compare supervised training
for a 5-layer DRMM with a corresponding DCN, and they show comparable accuracy (0.89% vs
0.81%, Table 3).
Unsupervised Training. We train the RFM and the 5-layer DRMM unsupervised with $N_U$ images, followed by an end-to-end re-training of the whole model (unsup-pretr) using $N_L$ labeled images. The results and comparison to the SWWAE model are shown in Table 1. The DRMM model outperforms the SWWAE model in both scenarios. (Filters and reconstructed images from the RFM are available in the Appendix.)
Table 1: Comparison of test error rates (%) between the best DRMM variants and other best published results on the MNIST dataset for the semi-supervised setting (taken from [31]) with $N_U = 60$K unlabeled images, of which $N_L \in \{100, 600, 1\text{K}, 3\text{K}\}$ are labeled.
Model                                   NL = 100       NL = 600      NL = 1K       NL = 3K
Convnet [10]                            22.98          7.86          6.45          3.35
MTC [18]                                12.03          5.13          3.64          2.57
PL-DAE [11]                             10.49          5.03          3.46          2.69
WTA-AE [13]                             --             2.37          1.92          --
SWWAE dropout [31]                      8.71 ± 0.34    3.31 ± 0.40   2.83 ± 0.10   2.10 ± 0.22
M1+TSVM [8]                             11.82 ± 0.25   5.72          4.24          3.49
M1+M2 [8]                               3.33 ± 0.14    2.59 ± 0.05   2.40 ± 0.02   2.18 ± 0.04
Skip Deep Generative Model [12]         1.32           --            --            --
LadderNetwork [17]                      1.06 ± 0.37    --            0.84 ± 0.08   --
Auxiliary Deep Generative Model [12]    0.96           --            --            --
catGAN [25]                             1.39 ± 0.28    --            --            --
ImprovedGAN [21]                        0.93 ± 0.065   --            --            --
RFM                                     14.47          5.61          4.67          2.96
DRMM 2-layer semi-sup                   11.81          3.73          2.88          1.72
DRMM 5-layer semi-sup                   3.50           1.56          1.67          0.91
DRMM 5-layer semi-sup NN+KL             0.57           --            --            --
SWWAE unsup-pretr [31]                  16.2           9.80          5.65          3.61
RFM unsup-pretr                         12.03          6.135         4.64          2.73
DRMM 5-layer unsup-pretr                --             4.41          2.95          1.68
Semi-Supervised Training. For semi-supervised training, we use a randomly chosen subset of $N_L = 100$, 600, 1K, and 3K labeled images and $N_U = 60$K unlabeled images from the training and validation set. Results are shown in Table 1 for an RFM, a 2-layer DRMM and a 5-layer DRMM, with comparisons to related work. The DRMMs perform comparably to state-of-the-art models. Specifically, the 5-layer DRMM yields the best results when $N_L = 3$K and $N_L = 600$, and the second-best result when $N_L = 1$K. We also show the training results of a 9-layer DRMM on CIFAR10 in Table 4 in Appendix H. The DRMM yields results on CIFAR10 comparable to the best semi-supervised methods. For more results and comparison with other works, see Appendix H.
5 Conclusions
Understanding successful deep vision architectures is important for improving performance and
solving harder tasks. In this paper, we have introduced a new family of hierarchical generative
models, whose inference algorithms for two different models reproduce deep convnets and decision
trees, respectively. Our initial implementation of the DRMM EG algorithm outperforms DCN backpropagation in both supervised and unsupervised classification tasks and achieves comparable/state-of-the-art performance on several semi-supervised classification tasks, with no architectural hyperparameter tuning.
Acknowledgments. Thanks to Xaq Pitkow and Ben Poole for helpful feedback. ABP and RGB
were supported by IARPA via DoI/IBC contract D16PC00003. 1 RGB was supported by NSF
CCF-1527501, AFOSR FA9550-14-1-0088, ARO W911NF-15-1-0316, and ONR N00014-12-10579. TN was supported by an NSF Graduate Reseach Fellowship and NSF IGERT Training Grant
(DGE-1250104).
1
The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of
the authors and should not be interpreted as necessarily representing the official policies or endorsements, either
expressed or implied, of IARPA, DoI/IBC, or the U.S. Government.
References
[1] F. Anselmi, J. Z. Leibo, L. Rosasco, J. Mutch, A. Tacchetti, and T. Poggio. Magic materials: a theory of
deep hierarchical architectures for learning sensory representations. MIT CBCL Technical Report, 2013.
[2] S. Arora, A. Bhaskara, R. Ge, and T. Ma. Provable bounds for learning some deep representations. arXiv
preprint arXiv:1310.6343, 2013.
[3] C. M. Bishop. Pattern Recognition and Machine Learning, volume 4. Springer New York, 2006.
[4] C. M. Bishop, J. Lasserre, et al. Generative or discriminative? getting the best of both worlds. Bayesian Statistics, 8:3–24, 2007.
[5] L. Breiman. Random forests. Machine Learning, 45(1):5–32, 2001.
[6] J. Bruna and S. Mallat. Invariant scattering convolution networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1872–1886, 2013.
[7] I. J. Goodfellow, D. Warde-Farley, M. Mirza, A. Courville, and Y. Bengio. Maxout networks. arXiv
preprint arXiv:1302.4389, 2013.
[8] D. P. Kingma, S. Mohamed, D. J. Rezende, and M. Welling. Semi-supervised learning with deep generative models. In Advances in Neural Information Processing Systems, pages 3581–3589, 2014.
[9] D. P. Kingma and M. Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
[10] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[11] D.-H. Lee. Pseudo-label: The simple and efficient semi-supervised learning method for deep neural
networks. In Workshop on Challenges in Representation Learning, ICML, volume 3, 2013.
[12] L. Maaløe, C. K. Sønderby, S. K. Sønderby, and O. Winther. Auxiliary deep generative models. arXiv preprint arXiv:1602.05473, 2016.
[13] A. Makhzani and B. J. Frey. Winner-take-all autoencoders. In Advances in Neural Information Processing Systems, pages 2773–2781, 2015.
[14] G. F. Montufar, R. Pascanu, K. Cho, and Y. Bengio. On the number of linear regions of deep neural networks. In Advances in Neural Information Processing Systems, pages 2924–2932, 2014.
[15] A. Ng and M. Jordan. On discriminative vs. generative classifiers: A comparison of logistic regression and
naive bayes. Advances in neural information processing systems, 14:841, 2002.
[16] A. B. Patel, T. Nguyen, and R. G. Baraniuk. A probabilistic theory of deep learning. arXiv preprint
arXiv:1504.00641, 2015.
[17] A. Rasmus, M. Berglund, M. Honkala, H. Valpola, and T. Raiko. Semi-supervised learning with ladder networks. In Advances in Neural Information Processing Systems, pages 3532–3540, 2015.
[18] S. Rifai, Y. N. Dauphin, P. Vincent, Y. Bengio, and X. Muller. The manifold tangent classifier. In Advances in Neural Information Processing Systems, pages 2294–2302, 2011.
[19] S. Rifai, P. Vincent, X. Muller, X. Glorot, and Y. Bengio. Contractive auto-encoders: Explicit invariance during feature extraction. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 833–840, 2011.
[20] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211–252, 2015.
[21] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved techniques for
training gans. arXiv preprint arXiv:1606.03498, 2016.
[22] A.-S. Sheikh, J. A. Shelton, and J. Lücke. A truncated EM approach for spike-and-slab sparse coding. Journal of Machine Learning Research, 15(1):2653–2687, 2014.
[23] K. Simonyan, A. Vedaldi, and A. Zisserman. Deep inside convolutional networks: Visualising image
classification models and saliency maps. arXiv preprint arXiv:1312.6034, 2013.
[24] S. Soatto and A. Chiuso. Visual representations: Defining properties and deep approximations. In
International Conference on Learning Representations, 2016.
[25] J. T. Springenberg. Unsupervised and semi-supervised learning with categorical generative adversarial
networks. arXiv preprint arXiv:1511.06390, 2015.
[26] J. T. Springenberg, A. Dosovitskiy, T. Brox, and M. Riedmiller. Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806, 2014.
[27] Y. Tang, R. Salakhutdinov, and G. Hinton. Deep mixtures of factor analysers. arXiv preprint
arXiv:1206.4635, 2012.
[28] G. Taylor, R. Burmeister, Z. Xu, B. Singh, A. Patel, and T. Goldstein. Training neural networks without
gradients: A scalable admm approach. arXiv preprint arXiv:1605.02026, 2016.
[29] A. van den Oord and B. Schrauwen. Factoring variations in natural images with deep Gaussian mixture models. In Advances in Neural Information Processing Systems, pages 3518–3526, 2014.
[30] V. N. Vapnik and V. Vapnik. Statistical learning theory, volume 1. Wiley New York, 1998.
[31] J. Zhao, M. Mathieu, R. Goroshin, and Y. LeCun. Stacked what-where autoencoders. arXiv preprint
arXiv:1506.02351, 2016.
5,783 | 6,232 | Learning Treewidth-Bounded Bayesian Networks
with Thousands of Variables
Mauro Scanagatta
IDSIA*, SUPSI†, USI‡
Lugano, Switzerland
[email protected]
Giorgio Corani
IDSIA*, SUPSI†, USI‡
Lugano, Switzerland
[email protected]
Cassio P. de Campos
Queen's University Belfast
Northern Ireland, UK
[email protected]
Marco Zaffalon
IDSIA*
Lugano, Switzerland
[email protected]
Abstract
We present a method for learning treewidth-bounded Bayesian networks from
data sets containing thousands of variables. Bounding the treewidth of a Bayesian
network greatly reduces the complexity of inferences. Yet, being a global property
of the graph, it considerably increases the difficulty of the learning process. Our
novel algorithm accomplishes this task, scaling both to large domains and to large
treewidths. Our novel approach consistently outperforms the state of the art on
experiments with up to thousands of variables.
1 Introduction
We consider the problem of structural learning of Bayesian networks with bounded treewidth,
adopting a score-based approach. Learning the structure of a bounded treewidth Bayesian network is
an NP-hard problem (Korhonen and Parviainen, 2013). Yet learning Bayesian networks with bounded
treewidth is necessary to allow exact tractable inference, since the worst-case inference complexity is
exponential in the treewidth k (under the exponential time hypothesis) (Kwisthout et al., 2010).
A pioneering approach, polynomial in both the number of variables and the treewidth bound, has
been proposed in Elidan and Gould (2009). It incrementally builds the network; at each arc addition
it provides an upper-bound on the treewidth of the learned structure. The limit of this approach is that,
as the number of variables increases, the gap between the bound and the actual treewidth becomes
large, leading to sparse networks. An exact method has been proposed in Korhonen and Parviainen
(2013), which finds the highest-scoring network with the desired treewidth. However, its complexity
increases exponentially with the number of variables n. Thus it has been applied in experiments with
15 variables at most. Parviainen et al. (2014) adopted an anytime integer linear programming (ILP)
approach, called TWILP. If the algorithm is given enough time, it finds the highest-scoring network
with bounded treewidth. Otherwise it returns a sub-optimal DAG with bounded treewidth. The ILP
problem has an exponential number of constraints in the number of variables; this limits its scalability,
even if the constraints can be generated online. Berg et al. (2014) casted the problem of structural
learning with limited treewidth as a problem of weighted partial Maximum Satisfiability. They solved
the problem exactly through a MaxSAT solver and performed experiments with 30 variables at most.
Nie et al. (2014) proposed an efficient anytime ILP approach with a polynomial number of constraints
* Istituto Dalle Molle di studi sull'Intelligenza Artificiale (IDSIA)
† Scuola universitaria professionale della Svizzera italiana (SUPSI)
‡ Università della Svizzera italiana (USI)
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
in the number of variables. Yet they report that the quality of the solutions quickly degrades as the
number of variables exceeds a few dozens and that no satisfactory solutions are found with data sets
containing more than 50 variables. Approximate approaches are therefore needed to scale to larger
domains.
Nie et al. (2015) proposed the method S2. It exploits the notion of k-tree, which is an undirected
maximal graph with treewidth k. A Bayesian network whose moral graph is a subgraph of a k-tree
has treewidth bounded by k. S2 is an iterative algorithm. Each iteration consists of two steps: a)
sampling uniformly a k-tree from the space of k-trees and b) recovering a DAG whose moral graph
is a sub-graph of the most promising sampled k-tree. The goodness of the k-tree is assessed via a
so-called informative score. Nie et al. (2016) further refine this idea, obtaining via A* the k-tree
which maximizes the informative score. This algorithm is called S2+.
Recent structural learning algorithms with unbounded treewidth (Scanagatta et al., 2015) can cope
with thousands of variables. Yet the unbounded treewidth provides no guarantee about the tractability
of the inferences of the learned models. We aim at filling this gap, learning treewidth-bounded
Bayesian network models in domains with thousands of variables.
We propose two novel methods for learning Bayesian networks with bounded treewidth. They exploit
the fact that any k-tree can be constructed by an iterative procedure that adds one variable at a time.
We propose an iterative procedure that, given an order on the variables, builds a DAG G adding one
variable at a time. The moral graph of G is ensured to be subgraph of a k-tree. The k-tree is designed
as to maximize the score of the resulting DAG. This is a major difference with respect to previous
works (Nie et al., 2015, 2016) in which the k-trees were randomly sampled. We propose both an
exact and an approximated variant of our algorithm; the latter is necessary to scale to thousands of
variables.
We discuss that the search space of the presented algorithms does not span the whole space of
bounded-treewidth DAGs. Yet our algorithms consistently outperform the state-of-the-art competitors
for structural learning with bounded treewidth. For the first time we present experimental results for
structural learning with bounded treewidth for domains involving up to ten thousand variables.
Software and supplementary material are available from http://blip.idsia.ch.
2 Structural learning
Consider the problem of learning the structure of a Bayesian network from a complete data set of N
instances D = {D1 , ..., DN }. The set of n categorical random variables is X = {X1 , ..., Xn }. The
goal is to find the best DAG G = (V, E), where V is the collection of nodes and E is the collection
of arcs. E can be represented by the set of parents Π1, ..., Πn of each variable.
Different scores can be used to assess the fit of a DAG; we adopt the Bayesian information criterion
(or simply BIC). The BIC score is decomposable, being constituted by the sum of the scores of the
individual variables:
BIC(G) = \sum_{i=1}^{n} BIC(X_i, \Pi_i) = \sum_{i=1}^{n} \big( LL(X_i \mid \Pi_i) + Pen(X_i, \Pi_i) \big) = \sum_{i=1}^{n} \Big( \sum_{\pi \in |\Pi_i|,\, x \in |X_i|} N_{x,\pi} \log \hat{\theta}_{x \mid \pi} \;-\; \frac{\log N}{2} (|X_i| - 1)(|\Pi_i|) \Big)
where \hat{\theta}_{x \mid \pi} is the maximum likelihood estimate of the conditional probability P(X_i = x \mid \Pi_i = \pi), N_{x,\pi} represents the number of times (X_i = x \wedge \Pi_i = \pi) appears in the data set, and |\cdot| indicates the size of the Cartesian product space of the variables given as argument. Thus |X_i| is the number of states of X_i and |\Pi_i| is the product of the number of states of the parents of X_i.
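As a concrete illustration, the following Python sketch computes BIC(X_i, Π_i) for one variable and one candidate parent set from a complete discrete data set. It is a minimal reading of the formula above under our own naming (bic_score and cardinality are illustrative), not the optimized routine used by the structural learning packages discussed here.

import math
from collections import Counter

def bic_score(data, child, parents, cardinality):
    # data: list of tuples (one value per variable); child, parents: column indices
    # cardinality: dict mapping column index -> number of states of that variable
    joint = Counter()  # N_{x,pi}: counts of (parent configuration, child value)
    marg = Counter()   # N_{pi}: counts of parent configurations
    for row in data:
        pi = tuple(row[p] for p in parents)
        joint[(pi, row[child])] += 1
        marg[pi] += 1
    # Log-likelihood with ML estimates theta_{x|pi} = N_{x,pi} / N_{pi}
    ll = sum(n * math.log(n / marg[pi]) for (pi, _x), n in joint.items())
    # Penalty: -(log N / 2) * (|X_i| - 1) * |Pi_i|
    pi_card = 1
    for p in parents:
        pi_card *= cardinality[p]
    pen = -(math.log(len(data)) / 2) * (cardinality[child] - 1) * pi_card
    return ll + pen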
Exploiting decomposability, we first identify independently for each variable a list of candidate
parent sets (parent set identification). Later, we select for each node the parent set that yields the
highest-scoring treewidth-bounded DAG (structure optimization).
2.1 Treewidth and k-trees
We illustrate the concept of treewidth following the notation of Elidan and Gould (2009). We
denote an undirected graph as H = (V, E) where V is the vertex set and E is the edge set. A tree
decomposition of H is a pair (C, T ) where C = {C1 , C2 , ..., Cm } is a collection of subsets of V and
T is a tree on C, so that:
• ∪_{i=1}^{m} C_i = V;
• for every edge which connects the vertices v1 and v2, there is a subset C_i which contains both v1 and v2;
• for all i, j, k in {1, 2, ..., m}: if C_j is on the path between C_i and C_k in T, then C_i ∩ C_k ⊆ C_j.
The width of a tree decomposition is max_i(|C_i|) − 1, where |C_i| is the number of vertices in C_i. The treewidth of H is the minimum width among all possible tree decompositions of H.
The treewidth can be equivalently defined in terms of a triangulation of H. A triangulated graph is an
undirected graph in which every cycle of length greater than three contains a chord. The treewidth of
a triangulated graph is the size of the maximal clique of the graph minus one. The treewidth of H is
the minimum treewidth over all the possible triangulations of H.
The treewidth of a Bayesian network is characterized with respect to all possible triangulations of its
moral graph. The moral graph M of a DAG is an undirected graph that includes an edge i − j for every arc i → j in the DAG and an edge p − q for every pair of arcs p → i, q → i in the DAG.
The treewidth of a DAG is the minimum treewidth over all the possible triangulations of its moral
graph M . Thus the maximal clique of any moralized triangulation of G is an upper bound on the
treewidth of the model.
k-trees An undirected graph Tk = (V, E) is a k-tree if it is a maximal graph of tree-width k: any
edge added to Tk increases its treewidth. A k-tree is inductively defined as follows (Patil, 1986).
Consider a (k + 1)-clique, namely a complete graph with k + 1 nodes. A (k + 1)-clique is a k-tree. A
(k + 1)-clique can be decomposed into multiple k-cliques. Let us denote by z a node not yet included
in the list of vertices V . Then the graph obtained by connecting z to every node of a k-clique of Tk
is also a k-tree. The treewidth of any subgraph of a k-tree (partial k-tree) is bounded by k. Thus a
DAG whose moral graph is subgraph of a k-tree has treewidth bounded by k.
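The inductive definition translates directly into code. The sketch below (our own, with illustrative names) grows a k-tree by starting from a (k+1)-clique and repeatedly connecting a new node to an existing k-clique, while maintaining the list of k-cliques that the learning algorithms of the next section choose parent sets from.

import itertools
import random

def grow_ktree(nodes, k):
    # Assumes len(nodes) > k; returns the edge set and the k-cliques of the k-tree.
    edges = set()
    root = nodes[:k + 1]
    for u, v in itertools.combinations(root, 2):   # the initial (k+1)-clique
        edges.add(frozenset((u, v)))
    kcliques = [frozenset(c) for c in itertools.combinations(root, k)]
    for z in nodes[k + 1:]:
        c = random.choice(kcliques)                # any existing k-clique works
        for u in c:
            edges.add(frozenset((u, z)))
        # the new (k+1)-clique c + {z} contributes k new k-cliques containing z
        kcliques.extend(frozenset(c - {u}) | {z} for u in c)
    return edges, kcliques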
3 Incremental treewidth-bounded structure learning
Our approach for the structure optimization task proceeds by repeatedly sampling an order σ over
the variables and then identifying the highest-scoring DAG with bounded treewidth consistent with
the order. An effective approach for structural learning based on order sampling has been introduced
by Teyssier and Koller (2012); however it does not enforce any treewidth constraint.
The size of the search space of orders is n!; this is smaller than the search space of the k-trees,
O(e^{n log(nk)}). Once the order σ is sampled, we incrementally learn the DAG. At each iteration the
moralization of the DAG is by design a subgraph of a k-tree. The treewidth of the DAG eventually
obtained is thus bounded by k. The algorithm proceeds as follows.
Initialization  The initial k-tree K_{k+1} is constituted by the complete clique over the first k + 1 variables in the order. The initial DAG G_{k+1} is learned over the same k + 1 variables. Since k + 1 is a tractable number of variables, we exactly learn G_{k+1} adopting the method of Cussens (2011). The moral graph of G_{k+1} is a subgraph of K_{k+1} and thus G_{k+1} has bounded treewidth.
Addition of the subsequent nodes We then iteratively add each remaining variable in the order.
Consider the next variable in the order, X_{σi}, where i ∈ {k + 2, ..., n}. Let us denote by G_{i−1} and K_{i−1} the DAG and the k-tree which have to be updated by adding X_{σi}. We add X_{σi} to G_{i−1}, constraining its parent set Π_{σi} to be a k-clique (or a subset of it) in K_{i−1}. This yields the updated DAG G_i. We then update the k-tree, connecting X_{σi} to such a k-clique. This yields the k-tree K_i; it contains an additional (k + 1)-clique compared to K_{i−1}. By construction, K_i is also a k-tree. The moral graph of G_i cannot add arcs outside this (k + 1)-clique; thus it is a subgraph of K_i.
Pruning orders  The initial k-tree K_{k+1} and the initial DAG G_{k+1} depend on which are the first k + 1 variables in the order, but not on their relative positions. Thus all the orders which differ only in the relative positions of the first k + 1 elements are equivalent for our algorithm: they yield the same K_{k+1} and G_{k+1}. Thus once we sample an order and perform structural learning, we prune the (k + 1)! − 1 orders which are equivalent to the current one.
In order to choose the parent set to be assigned to each variable added to the graph we propose two
algorithms: k-A* and k-G.
3.1 k-A*
We formulate the problem as a shortest-path-finding problem. We define each state as a step towards the completion of the structure, where a new variable is added to the DAG G. Given X_{σi}, the variable assigned in state S, we define a successor state of S for each k-clique to which we can link X_{σi+1}. The approach to solve the problem is based on a path-finding A* search, with cost function for state S defined as f(S) = g(S) + h(S). The goal is to find the state which minimizes f(S) once all variables have been assigned.
We define g(S) and h(S) as:

g(S) = \sum_{j=0}^{i} score(X_{σj}, \Pi_{σj}),    h(S) = \sum_{j=i+1}^{n} best(X_{σj}).

g(S) is the cost from the initial state to S; it corresponds to the sum of the scores of the already assigned parent sets. h(S) is the estimated cost from S to the goal; it is the sum of the scores of the best assignable parent sets for the remaining variables. Variable X_a can have X_b as parent only if X_b precedes X_a in the order.
The A* approach requires the h function to be admissible. The function h is admissible if the estimated cost is never greater than the true cost to the goal state. Our approach satisfies this property since the true cost of each step (the score of the parent set chosen for X_{σi+1}) is always equal to or greater than the estimated one (the score of the best selectable parent set for X_{σi+1}).
The previous discussion implies that h is consistent, meaning that for any state S and its successor T, h(S) ≤ h(T) + c(S, T), where c(S, T) is the cost of the edges added in T. The function f is monotonically non-decreasing on any path, and the algorithm is guaranteed to find the optimal path as long as the goal state is reachable. Additionally, there is no need to process a node more than once, as no node will be explored a second time with a lower cost.
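This search can be read as best-first search over prefixes of the order. The sketch below is a simplified rendering under our own assumptions: score(v, clique) returns the best-scoring parent set of v contained in the given clique together with its score, best(v) returns the unconstrained best score, initial_cliques are the k-cliques of the initial (k+1)-clique as frozensets, and k-tree bookkeeping is reduced to tracking the set of k-cliques. Since scores are maximized, they are negated to obtain costs; duplicate-state detection is omitted.

import heapq
import itertools

def k_astar(order, initial_cliques, score, best):
    n = len(order)
    suffix = [0.0] * (n + 1)            # suffix[i] = sum of best(order[j]) for j >= i
    for i in range(n - 1, -1, -1):
        suffix[i] = suffix[i + 1] + best(order[i])
    tie = itertools.count()             # tiebreaker so the heap never compares states
    heap = [(-suffix[0], next(tie), 0.0, 0, tuple(initial_cliques), ())]
    while heap:
        f, _, g, i, cliques, chosen = heapq.heappop(heap)
        if i == n:
            return chosen, -g           # goal state: all variables assigned
        v = order[i]
        for c in cliques:               # one successor per k-clique v can attach to
            s, pset = score(v, c)
            g2 = g - s                  # g(S): negated sum of assigned scores
            new = tuple(frozenset(c - {u}) | {v} for u in c)
            heapq.heappush(heap, (g2 - suffix[i + 1], next(tie), g2, i + 1,
                                  cliques + new, chosen + ((v, pset),)))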
3.2 k-G
A very high number of variables might prevent the use of k-A*. For those cases we propose k-G as a greedy alternative approach, which chooses at each step the best local parent set. Denoting by K_C the set of existing k-cliques in K, we choose as parent set for X_{σi}:

\Pi_{σi} = \underset{\pi \subseteq c,\, c \in K_C}{argmax}\; score(\pi).
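A direct brute-force rendering of this greedy step, assuming a local_score function such as the BIC of Section 2, could look as follows; enumerating subsets of each k-clique is affordable because cliques have only k nodes.

import itertools

def kg_parent_set(v, kcliques, local_score):
    best_score, best_parents = float('-inf'), ()
    for c in kcliques:
        members = sorted(c)
        for r in range(len(members) + 1):       # every subset of the clique
            for pset in itertools.combinations(members, r):
                s = local_score(v, pset)
                if s > best_score:
                    best_score, best_parents = s, pset
    return best_parents, best_score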
3.3 Space of learnable DAGs
A reverse topological order is an order {v_1, ..., v_n} over the vertices V of a DAG in which each v_i appears before its parents Π_i. The search space of our algorithms is restricted to the DAGs whose reverse topological order, when used as variable elimination order, has treewidth bounded by k. This prevents recovering DAGs which have bounded treewidth but lack this property.
We start by proving by induction that the reverse topological order has treewidth bounded by k in the DAGs recovered by our algorithms. Consider the incremental construction of the DAG previously discussed. The initial DAG G_{k+1} is induced over k + 1 variables; thus every elimination ordering has treewidth bounded by k.

For the inductive case, assume that G_{i−1} satisfies the property. Consider the next variable in the order, X_{σi}, where i ∈ {k + 2, ..., n}. Its parent set Π_{σi} is a subset of a k-clique in K_{i−1}. The only neighbors of X_{σi} in the updated DAG G_i are its parents Π_{σi}. Consider performing variable elimination on the moral graph of G_i, using a reverse topological order. Then X_{σi} will be eliminated before Π_{σi}, without introducing fill-in edges. Thus the treewidth associated to any reverse topological order is bounded by k. This property inductively applies also to the addition of the following nodes up to X_{σn}.
Inverted trees  An example of a DAG not recoverable by our algorithms is the specific class of polytrees that we call inverted trees, that is, DAGs with out-degree equal to one. An inverted tree
with m levels and treewidth k can be built as follows. Take the root node (level one) and connect it to
k child nodes (level two). Connect each node of level two to k child nodes (level three). Proceed in
this way up to the m-th level and then invert the direction of all the arcs.
Figure 1 shows an inverted tree with k=2 and m=3. It has treewidth two, since its moral graph
is constituted by the cliques {A,B,E}, {C,D,F}, {E,F,G}. The treewidth associated to the reverse
topological order is instead three, using the order G, F, D, C, E, A, B.
[Figure 1: Example of an inverted tree: A and B are parents of E; C and D are parents of F; E and F are parents of G.]
If we run our algorithms with bounded treewidth k=2, they will be unable to recover the actual inverted tree. They will instead identify a high-scoring DAG whose reverse topological order has treewidth 2.
4 Experiments
We compare k-A*, k-G, S2, S2+ and TWILP in various experiments. We compare them through an indicator which we call W-score: the percentage of worsening of the BIC score of the selected treewidth-bounded method compared to the score of the Gobnilp solver (Cussens, 2011). Gobnilp achieves higher scores than the treewidth-bounded methods since it has no limits on the treewidth. Let us denote by G the BIC score achieved by Gobnilp and by T the BIC score obtained by the given treewidth-bounded method. Notice that both G and T are negative. The W-score is W = (T − G)/G. W stands for worsening and thus lower values of W are better. The lowest value of W is zero, while there is no upper bound.
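In code the indicator is a one-liner; the function name is ours.

def w_score(g, t):
    # g: BIC score of the unbounded solver; t: score of the bounded method.
    # Both are negative; returns 0 when t == g and grows as t worsens.
    return (t - g) / g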
We adopt BIC as a scoring function. The reason is that an algorithm for approximate exploration of
the parent sets (Scanagatta et al., 2015) allowing high in-degree even on large domains exists at the
moment only for BIC.
4.1 Parent set score exploration
Before performing structural learning it is necessary to compute the scores of the candidate parent sets
for each node (parent set exploration). The different structural learning methods are then provided
with the same score of the parent sets.
A treewidth k implies that one should explore all the parent sets up to size k; thus the complexity of
parent set exploration increases exponentially with the treewidth. To let the parent set exploration
scale efficiently with large treewidths and large number of variables we apply the approach of
Scanagatta et al. (2015). It guides the exploration towards the most promising parent sets (with size
up to k) without scoring them all. This is done on the basis of an approximated score function that is
computed in constant time. The actual score of the most promising parent sets is eventually computed.
We allow 60 seconds of time for the computation of the scores of the parent set of each variable, in
each data set.
4.2 Our implementation of S2 and S2+
Here we provide some details of our implementation of S2 and S2+. The second phase of both S2
and S2+ looks for a DAG whose moralization is a subgraph of a chosen k-tree. For this task Nie et al.
(2014) adopt an approximate approach based on partial order sampling (Algorithm 2). We found that
using Gobnilp for this task yields consistently slightly higher scores; thus we adopt this approach in
our implementation. We believe that it is due to the fact that constraining the structure optimization
to a subjacent graph of a k-tree results in a small number of allowed arcs for the DAG. Thus our
implementation of S2 and S2+ finds the highest-scoring DAG whose moral graph is a subgraph of
the provided k-tree.
4.3 Learning inverted trees
As already discussed our approach cannot learn an inverted tree with k parents per node if the given
bounded treewidth is k. In this section we study this worst-case scenario.
We start with treewidth k = 2. We consider the number of variables n ∈ {21, 41, 61, 81, 101}. For each value of n we generate 5 different inverted trees. To generate an inverted tree we first select a root variable X and add k parents Π_X to it; then we continue by randomly choosing a leaf of the graph (at a generic iteration, there are leaves at different distances from X) and adding k parents to it, until the graph contains n variables.
All variables are binary and we sample their conditional probability tables from a Beta(1,1). We
sample 10,000 instances from each generated inverted tree.
We then perform structural learning with k-A*, k-G, S2, S2+ and TWILP, setting k = 2 as the limit on the treewidth. We allow each method to run for ten minutes. S2, S2+ and TWILP could in principle recover the true structure, which our algorithms cannot. The results are shown in Fig. 2. Qualitatively similar results are obtained repeating the experiments with k = 4.

[Figure 2: Structural learning results when the actual DAGs are inverted trees (k=2); W-score (y-axis, 0 to 0.1) against number of variables (x-axis, 20 to 100) for S2, TWILP, S2+, k-G and k-A*. Each point represents the mean W-score over 5 experiments. Lower values of the W-score are better.]
Despite the unfavorable setting, both k-G and k-A* yield DAGs with higher score than S2, S2+ and
TWILP consistently for each value of n. For n = 20 they found a close approximation to the optimal
graph. S2, S2+ and TWILP found different structures, with close score.
Thus the limitation of the space of learnable DAGs does not hurt the performance of k-G and k-A*.
In fact S2 could theoretically recover the actual DAG, but this is not feasible in practice as it requires a prohibitive number of samples from the space of k-trees. The exact solver TWILP was unable to find the exact solution within the time limit; thus it returned the best solution achieved within the time limit.
            S2        S2+       k-G       k-A*
Iterations  803150    3         7176      66
Median      -273600   -267921   -261648   -263250
Max         -271484   -266593   -258601   -261474

Table 1: Statistics of the solutions yielded by different methods on an inverted tree (n = 100, k = 4).
We further investigate the differences between methods in Table 1. Iterations is the number of
proposed solutions; for S2 and S2+ it corresponds to the number of explored k-trees, while for k-G
and k-A* it corresponds to the number of explored orders. During the execution, S2 samples almost
one million k-trees. Yet it yields the lowest-scoring DAGs among the different methods. This can be
explained considering that a randomly sampled k-tree has a low chance to cover a high-scoring DAG.
S2+ recovers only a few k-trees, but their scores are higher than those of S2. Thus the informative
score is effective at driving the search for good k-trees; yet it does not scale on large data sets as we
will see later. As for our methods, k-G samples a larger number of orders than k-A* does and this
allows it to achieve higher scores, even if it sub-optimally deals with each single order. Such statistics
show a similar pattern also in the next experiments.
DATASET    VAR.   GOBNILP   TWILP     S2        S2+       k-G       k-A*
nursery    9      -72159    -72159    -72159    -72159    -72159    -72159
breast     10     -2698     -2698     -2698     -2698     -2698     -2698
housing    14     -3185     -3213     -3252     -3247     -3206     -3203
adult      15     -200142   -200340   -201235   -200926   -200431   -200363
letter     17     -181748   -190086   -189539   -186815   -183369   -183241
zoo        17     -608      -620      -620      -619      -615      -613
mushroom   22     -53104    -68298    -68670    -64769    -57021    -55785
wdbc       31     -6919     -7190     -7213     -7209     -7109     -7088
audio      62     -2173     -2277     -2283     -2208     -2201     -2185
community  100    -77555    -         -107252   -88350    -82633    -82003
hill       100    -1277     -         -1641     -1427     -1284     -1279

Table 2: Comparison of the BIC scores yielded by different algorithms on the data sets analyzed by Nie et al. (2016). Among the treewidth-bounded methods, k-A* achieves the highest score on every data set (with ties on the smallest ones). In the first column we report the score obtained by Gobnilp without bound on the treewidth.
4.4 Small data sets
We now present experiments on the data sets considered by Nie et al. (2016). They involve up to 100
variables. We set the bounded treewidth to k = 4. We allow each method to run for ten minutes. We
perform 10 experiments on each data set and we report the median scores in Table 2.
On the smallest data sets all methods (including Gobnilp) achieve the same score. As the data sets
becomes larger, both k-A* and k-G achieve higher scores than S2, S2+ and TWILP (which does not
achieve the exact solution). Between our two novel algorithms, k-A* has a slight advantage over k-G.
4.5 Large data sets
We now consider 10 large data sets (100 ≤ n ≤ 400) listed in Table 3. We no longer run TWILP, as
it is unable to handle this number of variables.
Data set   n      Data set    n      Data set     n      Data set   n      Data set   n
Audio      100    Netflix     100    Retail       135    DNA        180    Andes      223
Jester     100    Accidents   111    Pumsb-star   163    Kosarek    190    MSWeb      294

Table 3: Large data sets sorted according to the number of variables.
        k-A*       S2         S2+
k-G     29/20/24   30/30/29   30/30/30
k-A*               29/27/20   29/27/21
S2                            12/13/30

Table 4: Result on the 30 experiments on large data sets. Each cell reports how many times the row algorithm yields a higher score than the column algorithm for treewidth 2/5/8. For instance k-G wins on all the 30 data sets against S2+ for each considered treewidth.
We consider the following treewidths: k ∈ {2, 5, 8}. We split each data set randomly into three subsets. Thus for each treewidth we run 10×3=30 structural learning experiments.
We let each method run for one hour. For S2+, we adopt a more favorable approach, allowing it to
run for one hour; if after one hour the first k-tree was not yet solved, we allow it to run until it has
solved the first k-tree.
In Table 4 we report how many times each method wins against another for each treewidth, out
of 30 experiments. The entries are boldfaced when the number of victories of an algorithm over
another is statistically significant (p-value <0.05) according to the sign-test. Consistently for any
chosen treewidth, k-G is significantly better than any competitor, including k-A*; moreover, k-A* is
significantly better than both S2 and S2+.
This can be explained by considering that k-G explores more orders than k-A*, as for a given order it
only finds an approximate solution. The results suggest that it is more important to explore many
orders instead of obtaining the optimal DAG given an order.
4.6 Very large data sets
Eventually we consider 14 very large data sets, containing between 400 and 10000 variables. We split each data set into three subsets. We thus perform 14×3=42 structural learning experiments with each algorithm.

We include three randomly-generated synthetic data sets containing 2000, 4000 and 10000 variables respectively. These networks have been generated using the software BNGenerator (http://sites.poli.usp.br/pmr/ltd/Software/BNGenerator/). Each variable has a number of states randomly drawn from 2 to 4 and a number of parents randomly drawn from 0 to 6.
Data set   n      Data set    n      Data set     n      Data set   n       Data set   n
Diabets    413    EachMovie   500    Reuters-52   889    BBC        1058    R4         4000
Pigs       441    Link        724    C20NG        910    Ad         1556    R10        10000
Book       500    WebKB       839    Munin        1041   R2         2000

Table 5: Very large data sets sorted according to the number n of variables.
We let each method run for one hour. The only two algorithms able to cope with these data sets are
k-G and S2. For all the experiments, both k-A* and S2+ fail to find even a single solution in the
allowed time limit; we verified this is not due to memory issues. Between k-G and S2, k-G wins 42 times out
of 42; this dominance is clearly significant. This result is consistently found under each choice of
treewidth (k =2, 5, 8). On average, the improvement of k-G over S2 fills about 60% of the gap which
separates S2 from the unbounded solver.
[Figure 3: Boxplots of the W-scores, summarizing the results over 14×3=42 structural learning experiments on very large data sets. Lower W-scores are better. The y-axis is shown in logarithmic scale (10^−1 to 10^2). The label of the x-axis also reports the adopted treewidth for each method: k-G(2), k-G(5), k-G(8), S2(2), S2(5), S2(8).]
The W-scores of these 42 structural learning experiments are summarized in Figure 3. For both S2 and k-G, a larger treewidth allows to recover a higher-scoring graph. In turn this decreases the W-score. However k-G scales better than S2 with respect to the treewidth; its W-score decreases more sharply with the treewidth. For S2, the difference between the treewidths seems negligible from the figure. This is due to the fact that the graphs learned are actually sparse.

Further experimental documentation, including how the scores achieved by the algorithms evolve over time, is available from http://blip.idsia.ch.
5 Conclusions
Our novel approaches for treewidth-bounded structure learning scale effectively both in the number of variables and in the treewidth, outperforming the competitors.
Acknowledgments
Work partially supported by the Swiss NSF grants 200021_146606/1 and IZKSZ2_162188.
References
Berg J., Järvisalo M., and Malone B. Learning optimal bounded treewidth Bayesian networks via maximum satisfiability. In AISTATS-14: Proceedings of the 17th International Conference on Artificial Intelligence and Statistics, 2014.

Cussens J. Bayesian network learning with cutting planes. In UAI-11: Proceedings of the 27th Annual Conference on Uncertainty in Artificial Intelligence, pages 153–160. AUAI Press, 2011.

Elidan G. and Gould S. Learning bounded treewidth Bayesian networks. In Advances in Neural Information Processing Systems 21, pages 417–424. Curran Associates, Inc., 2009.

Korhonen J. H. and Parviainen P. Exact learning of bounded tree-width Bayesian networks. In Proc. 16th Int. Conf. on AI and Stat., pages 370–378. JMLR W&CP 31, 2013.

Kwisthout J. H. P., Bodlaender H. L., and van der Gaag L. C. The necessity of bounded treewidth for efficient inference in Bayesian networks. In ECAI-10: Proceedings of the 19th European Conference on Artificial Intelligence, 2010.

Nie S., Mauá D. D., de Campos C. P., and Ji Q. Advances in learning Bayesian networks of bounded treewidth. In Advances in Neural Information Processing Systems, pages 2285–2293, 2014.

Nie S., de Campos C. P., and Ji Q. Learning bounded tree-width Bayesian networks via sampling. In ECSQARU-15: Proceedings of the 13th European Conference on Symbolic and Quantitative Approaches to Reasoning with Uncertainty, pages 387–396, 2015.

Nie S., de Campos C. P., and Ji Q. Learning Bayesian networks with bounded treewidth via guided search. In AAAI-16: Proceedings of the 30th AAAI Conference on Artificial Intelligence, 2016.

Parviainen P., Farahani H. S., and Lagergren J. Learning bounded tree-width Bayesian networks using integer linear programming. In Proceedings of the 17th International Conference on Artificial Intelligence and Statistics, 2014.

Patil H. P. On the structure of k-trees. Journal of Combinatorics, Information and System Sciences, pages 57–64, 1986.

Scanagatta M., de Campos C. P., Corani G., and Zaffalon M. Learning Bayesian networks with thousands of variables. In NIPS-15: Advances in Neural Information Processing Systems 28, pages 1855–1863, 2015.

Teyssier M. and Koller D. Ordering-based search: A simple and effective algorithm for learning Bayesian networks. CoRR, abs/1207.1429, 2012.
additional:1 accident:1 accomplishes:1 prune:1 maximize:1 shortest:1 elidan:3 monotonically:1 recoverable:1 multiple:1 reduces:1 eachmovie:1 exceeds:1 characterized:1 long:1 prevented:1 victory:1 variant:1 involving:1 subjacent:1 breast:1 iteration:5 represent:1 adopting:2 invert:1 achieved:3 c1:1 retail:1 addition:3 cell:1 campos:5 median:2 induced:1 undirected:5 integer:2 call:2 structural:16 constraining:2 split:2 enough:1 fit:1 bic:10 idea:1 br:1 casted:1 ltd:1 moral:12 returned:1 proceed:1 repeatedly:1 involve:1 listed:1 repeating:1 ten:3 dna:1 http:3 corani:2 outperform:1 northern:1 percentage:1 generate:2 nsf:1 notice:1 sign:1 estimated:3 per:1 nursery:1 dominance:1 drawn:2 prevent:1 verified:1 r10:1 boxplots:1 v1:3 graph:32 sum:3 run:9 letter:1 uncertainty:2 almost:1 vn:1 cussens:3 scaling:1 bound:6 ki:7 rvisalo:1 guaranteed:1 topological:7 refine:1 yielded:2 annual:1 constraint:4 sharply:1 qub:1 software:3 argument:1 span:1 performing:2 gould:3 according:3 universitaria:1 smaller:1 slightly:1 usp:1 explained:2 restricted:1 previously:1 belfast:1 discus:1 eventually:3 ilp:3 needed:1 fail:1 turn:1 tractable:2 adopted:2 available:3 apply:1 v2:2 enforce:1 generic:1 alternative:1 bodlaender:1 remaining:2 include:1 patil:2 exploit:2 build:2 maxsat:1 added:4 already:2 degrades:1 teyssier:2 win:3 ireland:1 distance:1 link:2 unable:3 separate:1 mauro:2 nx:2 intelligenza:1 reason:1 studi:1 induction:1 length:1 kk:4 equivalently:1 gk:7 negative:1 design:1 implementation:4 perform:4 allowing:2 upper:3 arc:5 treewidths:3 community:1 introduced:1 pair:2 namely:1 learned:4 twilp:11 barcelona:1 hour:4 nip:2 adult:1 able:1 proceeds:2 selectable:1 pattern:1 pig:1 pioneering:1 built:1 max:2 including:3 memory:1 difficulty:1 indicator:1 axis:2 categorical:1 evolve:1 relative:2 limitation:1 var:1 kwisthout:2 degree:2 consistent:2 principle:1 row:1 supported:1 ecai:1 guide:1 allow:5 neighbor:1 sparse:2 van:1 xn:1 stand:1 collection:3 qualitatively:1 cope:2 approximate:4 pruning:1 cutting:1 clique:16 global:1 uai:1 xi:9 mau:1 search:8 iterative:3 pen:1 professionale:1 table:10 additionally:1 promising:3 learn:3 obtaining:2 european:2 domain:5 aistats:1 constituted:3 bounding:1 s2:52 whole:1 reuters:1 child:2 allowed:2 x1:1 fig:1 site:1 sub:3 position:2 lugano:3 exponential:3 candidate:2 jmlr:1 admissible:2 dozen:1 minute:2 moralized:1 specific:1 learnable:2 list:2 explored:3 r2:1 symbol:1 exists:1 scuola:1 adding:3 effectively:1 corr:1 ci:7 execution:1 cartesian:1 nk:1 gap:3 parviainen:5 wdbc:1 logarithmic:1 simply:1 explore:2 prevents:1 partially:1 applies:1 ch:5 corresponds:3 satisfies:2 chance:1 conditional:2 goal:5 sorted:2 towards:2 feasible:1 hard:1 included:1 korhonen:3 uniformly:1 called:3 experimental:2 unfavorable:1 select:2 berg:2 latter:1 assessed:1 audio:2 d1:1 della:2 |
5,784 | 6,233 | Hierarchical Deep Reinforcement Learning:
Integrating Temporal Abstraction and
Intrinsic Motivation
Tejas D. Kulkarni*
DeepMind, London
[email protected]
Karthik R. Narasimhan*
CSAIL, MIT
[email protected]
Ardavan Saeedi
CSAIL, MIT
[email protected]
Joshua B. Tenenbaum
BCS, MIT
[email protected]
Abstract
Learning goal-directed behavior in environments with sparse feedback is a major
challenge for reinforcement learning algorithms. One of the key difficulties is insufficient exploration, resulting in an agent being unable to learn robust policies.
Intrinsically motivated agents can explore new behavior for their own sake rather
than to directly solve external goals. Such intrinsic behaviors could eventually
help the agent solve tasks posed by the environment. We present hierarchical-DQN (h-DQN), a framework to integrate hierarchical action-value functions, operating at different temporal scales, with goal-driven intrinsically motivated deep
reinforcement learning. A top-level q-value function learns a policy over intrinsic
goals, while a lower-level function learns a policy over atomic actions to satisfy
the given goals. h-DQN allows for flexible goal specifications, such as functions
over entities and relations. This provides an efficient space for exploration in
complicated environments. We demonstrate the strength of our approach on two
problems with very sparse and delayed feedback: (1) a complex discrete stochastic decision process with stochastic transitions, and (2) the classic ATARI game 'Montezuma's Revenge'.
1 Introduction
Learning goal-directed behavior with sparse feedback from complex environments is a fundamental
challenge for artificial intelligence. Learning in this setting requires the agent to represent knowledge at multiple levels of spatio-temporal abstractions and to explore the environment efficiently.
Recently, non-linear function approximators coupled with reinforcement learning [14, 16, 23] have
made it possible to learn abstractions over high-dimensional state spaces, but the task of exploration
with sparse feedback still remains a major challenge. Existing methods like Boltzmann exploration
and Thompson sampling [31, 19] offer significant improvements over ε-greedy, but are limited due to
the underlying models functioning at the level of basic actions. In this work, we propose a framework that integrates deep reinforcement learning with hierarchical action-value functions (h-DQN),
where the top-level module learns a policy over options (subgoals) and the bottom-level module
learns policies to accomplish the objective of each option. Exploration in the space of goals enables
efficient exploration in problems with sparse and delayed rewards. Additionally, our experiments
indicate that goals expressed in the space of entities and relations can help constrain the exploration
space for data efficient deep reinforcement learning in complex environments.
* Equal Contribution. Work done while Tejas Kulkarni was affiliated with MIT.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Reinforcement learning (RL) formalizes control problems as finding a policy π that maximizes expected future rewards [32]. Value functions V(s) are central to RL, and they cache the utility of any state s in achieving the agent's overall objective. Recently, value functions have also been generalized as V(s, g) in order to represent the utility of state s for achieving a given goal g ∈ G [33, 21]. When the environment provides delayed rewards, we adopt a strategy to first learn ways to achieve intrinsically generated goals, and subsequently learn an optimal policy to chain them together. Each of the value functions V(s, g) can be used to generate a policy that terminates when the agent reaches the goal state g. A collection of these policies can be hierarchically arranged with temporal dynamics for learning or planning within the framework of semi-Markov decision processes [34, 35]. In high-dimensional problems, these value functions can be approximated by neural networks as V(s, g; θ).
We propose a framework with hierarchically organized deep reinforcement learning modules working at different time-scales. The model takes decisions over two levels of hierarchy: (a) a top-level
module (meta-controller) takes in the state and picks a new goal, and (b) a lower-level module (controller) uses both the state and the chosen goal to select actions either until the goal is reached or
the episode terminates. The meta-controller then chooses another goal and steps (a-b) repeat. We
train our model using stochastic gradient descent at different temporal scales to optimize expected
future intrinsic (controller) and extrinsic rewards (meta-controller). We demonstrate the strength of
our approach on problems with delayed rewards: (1) a discrete stochastic decision process with a
long chain of states before receiving optimal extrinsic rewards and (2) a classic ATARI game (?Montezuma?s Revenge?) with even longer-range delayed rewards where most existing state-of-art deep
reinforcement learning approaches fail to learn policies in a data-efficient manner.
2 Literature Review
Reinforcement Learning with Temporal Abstractions Learning and operating over different
levels of temporal abstraction is a key challenge in tasks involving long-range planning. In the
context of hierarchical reinforcement learning [2], Sutton et al.[34] proposed the options framework,
which involves abstractions over the space of actions. At each step, the agent chooses either a onestep ?primitive? action or a ?multi-step? action policy (option). Each option defines a policy over
actions (either primitive or other options) and can be terminated according to a stochastic function
?. Thus, the traditional MDP setting can be extended to a semi-Markov decision process (SMDP)
with the use of options. Recently, several methods have been proposed to learn options in real-time
by using varying reward functions [35] or by composing existing options [28]. Value functions have
also been generalized to consider goals along with states [21]. Our work is inspired by these papers
and builds upon them.
Other related work for hierarchical formulations includes the MAXQ framework [6], which decomposed the value function of an MDP into combinations of value functions of smaller constituent
MDPs, as did Guestrin et al.[12] in their factored MDP formulation. Hernandez and Mahadevan [13]
combine hierarchies with short-term memory to handle partial observations. In the skill learning literature, Baranes et al.[1] have proposed a goal-driven active learning approach for learning skills in
continuous sensorimotor spaces.
In this work, we propose a scheme for temporal abstraction that involves simultaneously learning
options and a control policy to compose options in a deep reinforcement learning setting. Our
approach does not use separate Q-functions for each option, but instead treats the option as part of
the input, similar to [21]. This has two potential advantages: (1) there is shared learning between
different options, and (2) the model is scalable to a large number of options.
Intrinsic Motivation The nature and origin of ?good? intrinsic reward functions is an open question in reinforcement learning. Singh et al. [27] explored agents with intrinsic reward structures in
order to learn generic options that can apply to a wide variety of tasks. In another paper, Singh
et al. [26] take an evolutionary perspective to optimize over the space of reward functions for the
agent, leading to a notion of extrinsically and intrinsically motivated behavior. In the context of
hierarchical RL, Goel and Huber [10] discuss a framework for sub-goal discovery using the structural aspects of a learned policy model. Şimşek et al. [24] provide a graph partitioning approach to
subgoal identification.
Schmidhuber [22] provides a coherent formulation of intrinsic motivation, which is measured by
the improvements to a predictive world model made by the learning algorithm. Mohamed and
Rezende [17] have recently proposed a notion of intrinsically motivated learning within the framework of mutual information maximization. Frank et al. [9] demonstrate the effectiveness of artificial
curiosity using information gain maximization in a humanoid robot. Oudeyer et al. [20] categorize
intrinsic motivation approaches into knowledge based methods, competence or goal based methods
and morphological methods. Our work relates to competence based intrinsic motivation but other
complementary methods can be combined in future work.
Object-based Reinforcement Learning Object-based representations [7, 4] that can exploit the
underlying structure of a problem have been proposed to alleviate the curse of dimensionality in
RL. Diuk et al. [7] propose an Object-Oriented MDP, using a representation based on objects and
their interactions. Defining each state as a set of value assignments to all possible relations between
objects, they introduce an algorithm for solving deterministic object-oriented MDPs. Their representation is similar to that of Guestrin et al. [11], who describe an object-based representation in
the context of planning. In contrast to these approaches, our representation does not require explicit
encoding for the relations between objects and can be used in stochastic domains.
Deep Reinforcement Learning Recent advances in function approximation with deep neural networks have shown promise in handling high-dimensional sensory input. Deep Q-Networks and
its variants have been successfully applied to various domains including Atari games [16, 15] and
Go [23], but still perform poorly on environments with sparse, delayed reward signals.
Cognitive Science and Neuroscience The nature and origin of intrinsic goals in humans is a
thorny issue but there are some notable insights from existing literature. There is converging evidence in developmental psychology that human infants, primates, children, and adults in diverse
cultures base their core knowledge on certain cognitive systems including entities, agents and their actions, numerical quantities, space, social-structures and intuitive theories [29]. During curiosity-driven activities, toddlers use this knowledge to generate intrinsic goals such as building physically
stable block structures. In order to accomplish these goals, toddlers seem to construct subgoals in
the space of their core knowledge. Knowledge of space can also be utilized to learn a hierarchical
decomposition of spatial environments. This has been explored in Neuroscience with the successor
representation, which represents value functions in terms of the expected future state occupancy.
Decomposition of the successor representation have shown to yield reasonable subgoals for spatial
navigation problems [5, 30].
3 Model
Consider a Markov decision process (MDP) represented by states s ∈ S, actions a ∈ A, and transition function T : (s, a) → s′. An agent operating in this framework receives a state s from the external environment and can take an action a, which results in a new state s′. We define the extrinsic reward function as F : s → R. The objective of the agent is to maximize this function over long periods of time. For example, this function can take the form of the agent's survival time or score in a game.
Agents  Effective exploration in MDPs is a significant challenge in learning good control policies. Methods such as ε-greedy are useful for local exploration but fail to provide impetus for the agent to explore different areas of the state space. In order to tackle this, we utilize a notion of intrinsic goals g ∈ G. The agent focuses on setting and achieving sequences of goals via learning policies π_g in order to maximize cumulative extrinsic reward. In order to learn each π_g, the agent also has a critic, which provides intrinsic rewards, based on whether the agent is able to achieve its goals (see Figure 1).
Temporal Abstractions  As shown in Figure 1, the agent uses a two-stage hierarchy consisting of a controller and a meta-controller. The meta-controller receives state s_t and chooses a goal g_t ∈ G, where G denotes the set of all possible current goals. The controller then selects an action a_t using s_t and g_t. The goal g_t remains in place for the next few time steps either until it is achieved or a terminal state is reached. The internal critic is responsible for evaluating whether a goal has been reached and provides an appropriate reward r_t(g) to the controller.

[Figure 1: (Overview) The agent receives sensory observations and produces actions. Separate DQNs are used inside the meta-controller and controller. The meta-controller looks at the raw states and produces a policy over goals by estimating the action-value function Q2(s_t, g_t; θ2) (to maximize expected future extrinsic reward). The controller takes in states and the current goal, and produces a policy over actions by estimating the action-value function Q1(s_t, a_t; θ1, g_t) to solve the predicted goal (by maximizing expected future intrinsic reward). The internal critic provides a positive reward to the controller if and only if the goal is reached. The controller terminates either when the episode ends or when g is accomplished. The meta-controller then chooses a new g and the process repeats.]

In this work, we make a minimal
assumption of a binary internal reward, i.e. 1 if the goal is reached and 0 otherwise. The objective function for the controller is to maximize cumulative intrinsic reward: R_t(g) = \sum_{t'=t}^{\infty} \gamma^{t'-t} r_{t'}(g). Similarly, the objective of the meta-controller is to optimize the cumulative extrinsic reward F_t = \sum_{t'=t}^{\infty} \gamma^{t'-t} f_{t'}, where f_t are reward signals received from the environment. Note that the time scales for F_t and R_t are different: each f_t is the accumulated external reward over the time period between successive goal selections. The discounting in F_t, therefore, is over sequences of goals and not lower-level actions. This setup is similar to optimizing over the space of optimal reward functions to maximize fitness [25]. In our case, the reward functions are dynamic and temporally dependent on the sequential history of goals. Figure 1 illustrates the agent's use of the hierarchy over subsequent time steps.
Deep Reinforcement Learning with Temporal Abstractions  We use the Deep Q-Learning framework [16] to learn policies for both the controller and the meta-controller. Specifically, the controller estimates the following Q-value function:

Q_1^*(s, a; g) = \max_{\pi_{ag}} \mathbb{E}\Big[ \sum_{t'=t}^{\infty} \gamma^{t'-t} r_{t'} \,\Big|\, s_t = s, a_t = a, g_t = g, \pi_{ag} \Big]
             = \max_{\pi_{ag}} \mathbb{E}\big[ r_t + \gamma \max_{a_{t+1}} Q_1^*(s_{t+1}, a_{t+1}; g) \,\big|\, s_t = s, a_t = a, g_t = g, \pi_{ag} \big]    (1)

where g is the agent's goal in state s and π_ag is the action policy. Similarly, for the meta-controller, we have:

Q_2^*(s, g) = \max_{\pi_g} \mathbb{E}\Big[ \sum_{t'=t}^{t+N} f_{t'} + \gamma \max_{g'} Q_2^*(s_{t+N}, g') \,\Big|\, s_t = s, g_t = g, \pi_g \Big]    (2)

where N denotes the number of time steps until the controller halts given the current goal, g' is the agent's goal in state s_{t+N}, and π_g is the policy over goals. It is important to note that the transitions (s_t, g_t, f_t, s_{t+N}) generated by Q2 run at a slower time-scale than the transitions (s_t, a_t, g_t, r_t, s_{t+1}) generated by Q1.
We can represent Q*(s, g) ≈ Q(s, g; θ) using a non-linear function approximator with parameters θ. Each Q ∈ {Q1, Q2} can be trained by minimizing corresponding loss functions L1(θ1) and L2(θ2). We store experiences (s_t, g_t, f_t, s_{t+N}) for Q2 and (s_t, a_t, g_t, r_t, s_{t+1}) for Q1 in disjoint memory spaces D1 and D2 respectively. The loss function for Q1 can then be stated as:

L_1(\theta_{1,i}) = \mathbb{E}_{(s,a,g,r,s') \sim D_1}\big[ (y_{1,i} - Q_1(s, a; \theta_{1,i}, g))^2 \big],    (3)

where i denotes the training iteration number and y_{1,i} = r + \gamma \max_{a'} Q_1(s', a'; \theta_{1,i-1}, g).

Following [16], the parameters θ_{1,i−1} from the previous iteration are held fixed when optimizing the loss function. The parameters θ1 can be optimized using the gradient:

\nabla_{\theta_{1,i}} L_1(\theta_{1,i}) = \mathbb{E}_{(s,a,r,s') \sim D_1}\Big[ \big( r + \gamma \max_{a'} Q_1(s', a'; \theta_{1,i-1}, g) - Q_1(s, a; \theta_{1,i}, g) \big) \nabla_{\theta_{1,i}} Q_1(s, a; \theta_{1,i}, g) \Big]

The loss function L2 and its gradients can be derived using a similar procedure.
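In code, both losses are ordinary DQN losses that differ only in what plays the role of the action (an atomic action for Q1, a goal for Q2) and in what is appended to the state. A minimal numpy-style sketch of the target computation for the controller (our own simplification; the paper trains deep Q-networks with these targets):

import numpy as np

def controller_targets(batch, q1_old, gamma):
    # batch: list of (s, a, g, r, s_next, terminal) transitions drawn from D1
    # q1_old(s, g): frozen network from the previous iteration, returning a
    # vector of Q-values over atomic actions (cf. eq. 3)
    targets = []
    for s, a, g, r, s_next, terminal in batch:
        y = r if terminal else r + gamma * np.max(q1_old(s_next, g))
        targets.append((s, a, g, y))
    return targets  # regress Q1(s, g)[a] towards y by gradient descent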
Algorithm 1 Learning algorithm for h-DQN
1: Initialize experience replay memories {D1, D2} and parameters {θ1, θ2} for the controller and meta-controller respectively.
2: Initialize exploration probability ε1,g = 1 for the controller for all goals g and ε2 = 1 for the meta-controller.
3: for i = 1, num_episodes do
4:   Initialize game and get start state description s
5:   g ← EPSGREEDY(s, G, ε2, Q2)
6:   while s is not terminal do
7:     F ← 0
8:     s0 ← s
9:     while not (s is terminal or goal g reached) do
10:      a ← EPSGREEDY({s, g}, A, ε1,g, Q1)
11:      Execute a and obtain next state s' and extrinsic reward f from environment
12:      Obtain intrinsic reward r(s, a, s') from internal critic
13:      Store transition ({s, g}, a, r, {s', g}) in D1
14:      UPDATEPARAMS(L1(θ1,i), D1)
15:      UPDATEPARAMS(L2(θ2,i), D2)
16:      F ← F + f
17:      s ← s'
18:    end while
19:    Store transition (s0, g, F, s) in D2
20:    if s is not terminal then
21:      g ← EPSGREEDY(s, G, ε2, Q2)
22:    end if
23:  end while
24:  Anneal ε2 and the ε1,g values.
25: end for

Algorithm 2: EPSGREEDY(x, B, ε, Q)
1: if random() < ε then
2:   return random element from set B
3: else
4:   return argmax_{m ∈ B} Q(x, m)
5: end if

Algorithm 3: UPDATEPARAMS(L, D)
1: Randomly sample mini-batches from D
2: Perform gradient descent on loss L(θ) (cf. (3))
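Algorithm 2 is small enough to transcribe directly. A sketch in Python, assuming Q is a callable scoring function (our own framing):

```python
import random

def eps_greedy(x, B, eps, Q):
    """Algorithm 2: with probability eps return a random element of B,
    otherwise the element of B maximizing Q(x, m)."""
    if random.random() < eps:
        return random.choice(list(B))
    return max(B, key=lambda m: Q(x, m))
```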
Learning Algorithm We learn the parameters of h-DQN using stochastic gradient descent at different time scales: transitions from the controller are collected at every time step, but a transition
from the meta-controller is only collected when the controller terminates (i.e., when a goal is re-picked or the episode ends). Each new goal g is drawn in an ε-greedy fashion (Algorithms 1 & 2)
with the exploration probability ε2 annealed as learning proceeds (from a starting value of 1).
In the controller, at every time step, an action is drawn with a goal using the exploration probability
ε1,g, which depends on the current empirical success rate of reaching g. Specifically, if the success
rate for goal g is > 90%, we set ε1,g = 0.1, else 1. All ε1,g values are annealed to 0.1. The model
parameters (θ1, θ2) are periodically updated by drawing experiences from replay memories D1 and
D2, respectively (see Algorithm 3).
4 Experiments
(1) Discrete stochastic decision process with delayed rewards For our first experiment, we consider a stochastic decision process where the extrinsic reward depends on the history of visited states
in addition to the current state. This task demonstrates the importance of goal-driven exploration in
such environments. There are 6 possible states and the agent always starts at s2 . The agent moves
left deterministically when it chooses left action; but the action right only succeeds 50% of the time,
resulting in a left move otherwise. The terminal state is s1 and the agent receives the reward of 1
when it first visits s6 and then s1 . The reward for going to s1 without visiting s6 is 0.01. This is a
modified version of the MDP in [19], with the reward structure adding complexity to the task (see
Figure 2).
We consider each state as a candidate goal for exploration. This enables and encourages the
agent to visit state s6 (if chosen as a goal) and hence, learn the optimal policy. For each goal,
the agent receives a positive intrinsic reward if and only if it reaches the corresponding state.

Figure 2: A stochastic decision process where the reward at the terminal state s1 depends on whether
s6 is visited (r = 1) or not (r = 1/100). Edges are annotated with transition probabilities (red arrow:
move right, black arrow: move left).

Results We compare the performance of our approach (without deep neural networks) against
Q-Learning as a baseline (without intrinsic rewards) in terms of the average extrinsic reward gained
in an episode. In our experiments, all ε parameters are annealed from 1 to 0.1
over 50k steps. The learning rate is set to 2.5 × 10^{-4}. Figure 3 plots the evolution of reward for
both methods averaged over 10 different runs. As expected, we see that Q-Learning is unable to find
the optimal policy even after 200 epochs, converging to a sub-optimal policy of reaching state s1
directly to obtain a reward of 0.01. In contrast, our approach with hierarchical Q-estimators learns
to choose goals s4 , s5 or s6 , which statistically lead the agent to visit s6 before going back to s1 .
Our agent obtains a significantly higher average reward of 0.13.
Figure 3: (left) Average reward (over 10 runs) of our approach compared to Q-learning. (right)
Number of visits of our approach to states s3–s6 (over 1000 episodes). Initial state: s2, terminal state: s1.
Figure 3 illustrates that the number of visits to states s3 , s4 , s5 , s6 increases with episodes of training.
Each data point shows the average number of visits for each state over the last 1000 episodes. This
indicates that our model is choosing goals in a way so that it reaches the critical state s6 more often.
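For reference, the environment itself is easy to reproduce. A minimal sketch under our reading of the description above (the behavior at the boundary states is our own assumption):

```python
import random

class StochasticMDP:
    """Six-state chain of Figure 2: 'left' always moves left; 'right'
    succeeds with probability 0.5 and otherwise results in a left move.
    s1 is terminal; the terminal reward is 1 if s6 was visited and
    1/100 otherwise."""
    def __init__(self):
        self.state, self.visited_s6 = 2, False  # the agent starts at s2

    def step(self, action):  # action in {'left', 'right'}
        if action == 'right' and random.random() < 0.5:
            self.state = min(self.state + 1, 6)
        else:
            self.state = max(self.state - 1, 1)
        if self.state == 6:
            self.visited_s6 = True
        if self.state == 1:  # terminal state
            return self.state, (1.0 if self.visited_s6 else 0.01), True
        return self.state, 0.0, False
```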
(2) ATARI game with delayed rewards We now consider "Montezuma's Revenge", an ATARI
game with sparse, delayed rewards. The game requires the player to navigate the explorer (in red)
through several rooms while collecting treasures. In order to pass through doors (in the top right and
top left corners of the figure), the player has to first pick up the key. The player has to then climb
down the ladders on the right and move left towards the key, resulting in a long sequence of actions
before receiving a reward (+100) for collecting the key. After this, navigating towards the door and
opening it results in another reward (+300).
Basic DQN [16] achieves a score of 0 while even the best performing system, Gorila DQN [18],
manages only 4.16 on average. Asynchronous actor critic methods achieve a non-zero score but
require 100s of millions of training frames [15].
[Figure 4 panels: (top-left) Architecture — image (s) + goal (g) → ReLU:Conv (filter 8, 32 feature maps, stride 4) → ReLU:Conv (filter 4, 64 feature maps, stride 2) → ReLU:Conv (filter 3, 64 feature maps, stride 1) → ReLU:Linear (h = 512) → Linear → Q1(s, a; g); (top-right) total extrinsic reward over training steps, our approach vs. DQN; (bottom-left) success ratio for reaching the goal "key" over training steps; (bottom-right) success % of different goals (top-left door, top-right door, middle ladder, bottom-left ladder, bottom-right ladder, key) over time.]
Figure 4: (top-left) Architecture: DQN architecture for the controller (Q1 ). A similar architecture
produces Q2 for the meta-controller (without goal as input). (top-right) The joint training learns to
consistently get high rewards. (bottom-left) Goal success ratio: The agent learns to choose the key
more often as training proceeds and is successful at achieving it. (bottom-right) Goal statistics:
During early phases of joint training, all goals are equally preferred due to high exploration but as
training proceeds, the agent learns to select appropriate goals such as the key and bottom-left door.
Setup The agent needs intrinsic motivation to explore meaningful parts of the scene before learning about the advantage of obtaining the key. Inspired by developmental psychology literature [29]
and object-oriented MDPs [7], we use entities or objects in the scene to parameterize goals in this
environment. Unsupervised detection of objects in visual scenes is an open problem in computer
vision, although there has been recent progress in obtaining objects directly from image or motion
data [8]. In this work, we built a custom pipeline to provide plausible object candidates. Note that
the agent is still required to learn which of these candidates are worth pursuing as goals. The controller and meta-controller are convolutional neural networks (Figure 4) that learn representations
from raw pixel data. We use the Arcade Learning Environment [3] to perform experiments.
The internal critic is defined in the space of ⟨entity1, relation, entity2⟩, where relation is a function over configurations of the entities. In our experiments, the agent learns to choose entity2. For
instance, the agent is deemed to have completed a goal (and only then receives a reward) if the agent
entity reaches another entity such as the door. The critic computes binary rewards using the relative
positions of the agent and the goal (1 if the goal was reached). Note that this notion of relational
intrinsic rewards can be generalized to other settings. For instance, in the ATARI game "Asteroids",
the agent could be rewarded when the bullet reaches the asteroid or if simply the ship never reaches
an asteroid. In "Pacman", the agent could be rewarded if the pellets on the screen are reached. In the
most general case, we can potentially let the model evolve a parameterized intrinsic reward function
given entities. We leave this for future work.
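As a concrete illustration, the relational critic for the ⟨agent, reaches, goal⟩ case reduces to a position check. A minimal sketch (the coordinate representation and the distance threshold are our own illustrative choices):

```python
def internal_critic(agent_pos, goal_pos, threshold=2.0):
    """Binary intrinsic reward: 1 if the agent entity reaches the goal
    entity, computed from their relative positions, else 0."""
    dx = agent_pos[0] - goal_pos[0]
    dy = agent_pos[1] - goal_pos[1]
    reached = (dx * dx + dy * dy) ** 0.5 <= threshold
    return (1.0 if reached else 0.0), reached
```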
Model Architecture and Training As shown in Figure 4, the model consists of stacked convolutional layers with rectified linear units (ReLU). The input to the meta-controller is a set of four
consecutive images of size 84 × 84. To encode the goal output from the meta-controller, we append
a binary mask of the goal location in image space to the original 4 consecutive frames. This
augmented input is passed to the controller. The experience replay memories D1 and D2 were set to
sizes 10^6 and 5 × 10^4 respectively. We set the learning rate to 2.5 × 10^{-4}, with a discount
rate of 0.99. We follow a two-phase training procedure: (1) In the first phase, we set the exploration
parameter ε2 of the meta-controller to 1 and train the controller on actions. This effectively leads to
pre-training the controller so that it can learn to solve a subset of the goals. (2) In the second phase,
we jointly train the controller and meta-controller.
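A PyTorch sketch of the controller network of Figure 4 follows, assuming the goal mask is stacked as a fifth input channel (the exact way the mask is appended is our reading of the text, not spelled out in the paper):

```python
import torch.nn as nn

class Controller(nn.Module):
    """Conv stack of Figure 4: 4 stacked 84x84 frames + 1 goal mask."""
    def __init__(self, num_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(5, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),   # 84 -> 20 -> 9 -> 7
            nn.Linear(512, num_actions),             # Q1(s, a; g) per action
        )

    def forward(self, x):  # x: (batch, 5, 84, 84)
        return self.net(x)
```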
Figure 5: Sample game play on Montezuma's Revenge: The four quadrants are arranged in a
temporal order (top-left, top-right, bottom-left and bottom-right). First, the meta-controller chooses
key as the goal (illustrated in red). The controller then tries to satisfy this goal by taking a series
of low level actions (only a subset shown) but fails due to colliding with the skull (the episode
terminates here). The meta-controller then chooses the bottom-right ladder as the next goal and the
controller terminates after reaching it. Subsequently, the meta-controller chooses the key and the
top-right door and the controller is able to successfully achieve both these goals.
Results Figure 4 shows reward progress from the joint training phase: it is evident that the model
starts gradually learning to both reach the key and open the door to get a reward of around +400 per
episode. The agent learns to choose the key more often as training proceeds and is also successful
at reaching it. We observe that the agent first learns to perform the simpler goals (such as reaching
the right door or the middle ladder) and then slowly starts learning the "harder" goals such as the key
and the bottom ladders, which provide a path to higher rewards. Figure 4 also shows the evolution of
the success rate of goals that are picked. At the end of training, we can see that the "key", "bottom-left ladder" and "bottom-right ladder" are chosen increasingly often.
In order to scale up to solve the entire game, several key ingredients are missing, such as automatic
discovery of objects from videos to aid the goal parameterization we considered, a flexible short-term memory, and the ability to intermittently terminate ongoing options. We also show some screenshots from a test run with our agent (with epsilon set to 0.1) in Figure 5, as well as a sample animation
of the run.2

2 Sample trajectory of a run on "Montezuma's Revenge": https://goo.gl/3Z64Ji
References
[1] A. Baranes and P.-Y. Oudeyer. Active learning of inverse models with intrinsically motivated goal exploration in robots. Robotics and Autonomous Systems, 61(1):49–73, 2013.
[2] A. G. Barto and S. Mahadevan. Recent advances in hierarchical reinforcement learning. Discrete Event Dynamic Systems, 13(4):341–379, 2003.
[3] M. G. Bellemare, Y. Naddaf, J. Veness, and M. Bowling. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 2012.
[4] L. C. Cobo, C. L. Isbell, and A. L. Thomaz. Object focused q-learning for autonomous agents. In Proceedings of AAMAS, pages 1061–1068, 2013.
[5] P. Dayan. Improving generalization for temporal difference learning: The successor representation. Neural Computation, 5(4):613–624, 1993.
[6] T. G. Dietterich. Hierarchical reinforcement learning with the MAXQ value function decomposition. Journal of Artificial Intelligence Research (JAIR), 13:227–303, 2000.
[7] C. Diuk, A. Cohen, and M. L. Littman. An object-oriented representation for efficient reinforcement learning. In Proceedings of the International Conference on Machine Learning, pages 240–247, 2008.
[8] S. Eslami, N. Heess, T. Weber, Y. Tassa, K. Kavukcuoglu, and G. E. Hinton. Attend, infer, repeat: Fast scene understanding with generative models. arXiv preprint arXiv:1603.08575, 2016.
[9] M. Frank, J. Leitner, M. Stollenga, A. Förster, and J. Schmidhuber. Curiosity driven reinforcement learning for motion planning on humanoids. Intrinsic motivations and open-ended development in animals, humans, and robots, page 245, 2015.
[10] S. Goel and M. Huber. Subgoal discovery for hierarchical reinforcement learning using learned policies. In FLAIRS Conference, pages 346–350, 2003.
[11] C. Guestrin, D. Koller, C. Gearhart, and N. Kanodia. Generalizing plans to new environments in relational MDPs. In Proceedings of the International Joint Conference on Artificial Intelligence, pages 1003–1010, 2003.
[12] C. Guestrin, D. Koller, R. Parr, and S. Venkataraman. Efficient solution algorithms for factored MDPs. Journal of Artificial Intelligence Research, pages 399–468, 2003.
[13] N. Hernandez-Gardiol and S. Mahadevan. Hierarchical memory-based reinforcement learning. In Advances in Neural Information Processing Systems, pages 1047–1053, 2001.
[14] J. Koutník, J. Schmidhuber, and F. Gomez. Evolving deep unsupervised convolutional networks for vision-based reinforcement learning. In Proceedings of the 2014 Conference on Genetic and Evolutionary Computation, pages 541–548. ACM, 2014.
[15] V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. P. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu. Asynchronous methods for deep reinforcement learning. arXiv preprint arXiv:1602.01783, 2016.
[16] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
[17] S. Mohamed and D. J. Rezende. Variational information maximisation for intrinsically motivated reinforcement learning. In Advances in Neural Information Processing Systems, pages 2116–2124, 2015.
[18] A. Nair, P. Srinivasan, S. Blackwell, C. Alcicek, R. Fearon, A. De Maria, V. Panneershelvam, et al. Massively parallel methods for deep reinforcement learning. arXiv preprint arXiv:1507.04296, 2015.
[19] I. Osband, C. Blundell, A. Pritzel, and B. Van Roy. Deep exploration via bootstrapped DQN. arXiv preprint arXiv:1602.04621, 2016.
[20] P.-Y. Oudeyer and F. Kaplan. What is intrinsic motivation? A typology of computational approaches. Frontiers in Neurorobotics, 1:6, 2009.
[21] T. Schaul, D. Horgan, K. Gregor, and D. Silver. Universal value function approximators. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pages 1312–1320, 2015.
[22] J. Schmidhuber. Formal theory of creativity, fun, and intrinsic motivation (1990–2010). IEEE Transactions on Autonomous Mental Development, 2(3):230–247, 2010.
[23] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, et al. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.
[24] Ö. Şimşek, A. Wolfe, and A. Barto. Identifying useful subgoals in reinforcement learning by local graph partitioning. In Proceedings of the International Conference on Machine Learning, pages 816–823, 2005.
[25] S. Singh, R. L. Lewis, and A. G. Barto. Where do rewards come from. In Proceedings of the Annual Conference of the Cognitive Science Society, pages 2601–2606, 2009.
[26] S. Singh, R. L. Lewis, A. G. Barto, and J. Sorg. Intrinsically motivated reinforcement learning: An evolutionary perspective. IEEE Transactions on Autonomous Mental Development, 2(2):70–82, 2010.
[27] S. P. Singh, A. G. Barto, and N. Chentanez. Intrinsically motivated reinforcement learning. In Advances in Neural Information Processing Systems, pages 1281–1288, 2004.
[28] J. Sorg and S. Singh. Linear options. In Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems, pages 31–38, Richland, SC, 2010.
[29] E. S. Spelke and K. D. Kinzler. Core knowledge. Developmental Science, 10(1):89–96, 2007.
[30] K. L. Stachenfeld, M. Botvinick, and S. J. Gershman. Design principles of the hippocampal cognitive map. In Advances in Neural Information Processing Systems, pages 2528–2536, 2014.
[31] B. C. Stadie, S. Levine, and P. Abbeel. Incentivizing exploration in reinforcement learning with deep predictive models. arXiv preprint arXiv:1507.00814, 2015.
[32] R. S. Sutton and A. G. Barto. Introduction to Reinforcement Learning. MIT Press, Cambridge, 1998.
[33] R. S. Sutton, J. Modayil, M. Delp, T. Degris, P. M. Pilarski, A. White, and D. Precup. Horde: A scalable real-time architecture for learning knowledge from unsupervised sensorimotor interaction. In The 10th International Conference on Autonomous Agents and Multiagent Systems, pages 761–768, 2011.
[34] R. S. Sutton, D. Precup, and S. Singh. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112(1):181–211, 1999.
[35] C. Szepesvari, R. S. Sutton, J. Modayil, S. Bhatnagar, et al. Universal option models. In Advances in Neural Information Processing Systems, pages 990–998, 2014.
Confusions over Time: An Interpretable Bayesian
Model to Characterize Trends in Decision Making
Himabindu Lakkaraju
Department of Computer Science
Stanford University
[email protected]
Jure Leskovec
Department of Computer Science
Stanford University
[email protected]
Abstract
We propose Confusions over Time (CoT), a novel generative framework which
facilitates a multi-granular analysis of the decision making process. The CoT
not only models the confusions or error properties of individual decision makers
and their evolution over time, but also allows us to obtain diagnostic insights into
the collective decision making process in an interpretable manner. To this end,
the CoT models the confusions of the decision makers and their evolution over
time via time-dependent confusion matrices. Interpretable insights are obtained
by grouping similar decision makers (and items being judged) into clusters and
representing each such cluster with an appropriate prototype and identifying the
most important features characterizing the cluster via a subspace feature indicator
vector. Experimentation with real world data on bail decisions, asthma treatments,
and insurance policy approval decisions demonstrates that CoT can accurately
model and explain the confusions of decision makers and their evolution over
time.
1 Introduction
Several diverse domains such as judiciary, health care, and insurance rely heavily on human decision
making. Since decisions of judges, doctors, and other decision makers impact millions of people, it is
important to reduce errors in judgement. The first step towards reducing such errors and improving
the quality of decision making is to diagnose the errors being made by the decision makers. It is
crucial to not only identify the errors made by individual decision makers and how they change over
time, but also to determine common patterns of errors encountered in the collective decision making
process. This turns out to be quite challenging in practice partly because there is no ground truth
which captures the optimal decision in a given scenario.
Prior research has mainly focussed on modeling decisions of individual decision makers [2, 12]. For
instance, the Dawid-Skene model [2] assumes that each item (eg., a patient) has an underlying true
label (eg., a particular treatment) and a decision maker?s evaluation of the item will be masked by
her own biases and confusions. Confusions of each individual decision maker j are modeled using a
latent confusion matrix ?j , where an entry in the (p, q) cell denotes the probability that an item with
true label p will be assigned a label q by the decision maker j. The true labels of items and latent
confusion matrices of decision makers are jointly inferred as part of the inference process. However,
a major drawback of the Dawid-Skene framework and several of its extensions [16, 17, 12] is that
they do not provide any diagnostic insights into the collective decision making process. Furthermore,
none of these approaches account for temporal changes in the confusions of decision makers.
Here, we propose a novel Bayesian framework, Confusions over Time (CoT), which jointly: 1)
models the confusions of individual decision makers 2) captures the temporal dynamics of their
decision making 3) provides interpretable insights into the collective decision making process, and 4)
infers true labels of items. While there has been prior research on each of the aforementioned aspects
independently, there has not been a single framework which ties all of them together in a principled
yet simple way.
The modeling process of the CoT groups decision makers (and items) into clusters. Each such cluster
is associated with a subspace feature indicator vector which determines the most important features
that characterize the cluster, and a prototype which is the representative data point for that cluster. The
prototypes and the subspace feature indicator vectors together contribute to obtaining interpretable
insights into the decision making process. The decisions made by decision makers on items are
modeled as interactions between such clusters. More specifically, each pair of (decision maker,
item) clusters is associated with a set of latent confusion matrices, one for each discrete time instant.
The decisions are then modeled as multinomial variables sampled from such confusion matrices.
The inference process involves jointly inferring cluster assignments, the latent confusion matrices,
prototypes and feature indicator vectors corresponding to each of the clusters, and true labels of items
using a collapsed Gibbs sampling procedure.
We analyze the performance of CoT on three real-world datasets: (1) judicial bail decisions; (2)
treatment recommendations for asthma patients; (3) decisions to approve/deny insurance requests.
Experimental results demonstrate that the proposed framework is very effective at inferring true
labels of items, predicting decisions made by decision makers, and providing diagnostic insights into
the collective decision making process.
2 Related Work
Here, we provide an overview of related research on modeling decision making. We also highlight
the connections of this work to two other related yet different research directions: stochastic block
models, and interpretable models.
Modeling decision making. There has been a renewed interest in analyzing and understanding
human decisions due to the recent surge in applications in crowdsourcing, public policy, and education [6]. Prior research in this area has primarily focussed on the following problems: inferring
true labels of items from human annotations [17, 6, 18, 3], inferring the expertise of decision makers [5, 19, 21], analyzing confusions or error properties of individual decision makers [2, 12, 10], and
obtaining diagnostic insights into the collective decision making process [10].
While some of the prior work has addressed each of these problems independently, there have been
very few attempts to unify the aforementioned directions. For instance, Whitehill et al. [19] proposed
a model which jointly infers the true labels and an estimate of each evaluator's quality by modeling decisions
as functions of the expertise levels of decision makers and the difficulty levels of items. However, this
approach neither models the error properties of decision makers, nor provides any diagnostic insights
into the process of decision making. Approaches proposed by Skene et al. [2] and Liu et al. [12]
model the confusions of individual decision makers and also estimate the true labels of items, but fail
to provide any diagnostic insights into the patterns of collective decisions. Recently, Lakkaraju et al.
proposed a framework [10] which also provides diagnostic insights but it requires a post-processing
step employing Apriori algorithm to obtain these insights. Furthermore, none of the aforementioned
approaches model the temporal dynamics of decision making.
Stochastic block models. There has been a long line of research on modeling relational data using
stochastic block models [15, 7, 20]. These modeling techniques typically involved grouping entities
(eg., nodes in a graph) such that interactions between these entities (eg., edges in a graph) are governed
by their corresponding clusters. However, these approaches do not model the nuances of decision
making such as confusions of decision makers which is crucial to our work.
Interpretable models. A large body of machine learning literature focused on developing interpretable models for classification [11, 9, 13, 1] and clustering [8]. To this end, various classes of
models such as decision lists [11], decision sets [9], prototype (case) based models [1], and generalized additive models [13] were proposed. However, none of these approaches can be readily applied
to determine error properties of decision makers.
3 Confusions over Time Model
In this section, we present CoT, a novel Bayesian framework which facilitates an interpretable,
multi-granular analysis of the decision making process. We begin by discussing the problem setting
and then dive into the details of modeling and inference.
3.1 Setting
Let J and I denote the sets of decision makers and items respectively. Each item is judged by one
or more decision makers. In domains such as judiciary and health care, each defendant or patient is
typically assessed by a single decision maker, where as each item is evaluated by multiple decision
makers in settings such as crowdsourcing. Our framework can handle either scenario and does not
make any assumptions about the number of decision makers required to evaluate an item. However,
we do assume that each item is judged no more than once by any given decision maker. The decision
made by a decision maker j about an item i is denoted by $r_{i,j}$. Each decision $r_{i,j}$ is associated
with a discrete time stamp $t_{i,j} \in \{1, 2, \ldots, T\}$ corresponding to the time instant when the item i was
evaluated by the decision maker j.
Each decision maker has M different features or attributes, and $a_m^{(j)}$ denotes the value of the m-th
feature of decision maker j. Similarly, each item has N different features or attributes, and $b_n^{(i)}$
represents the value of the n-th feature of item i. Each item i is associated with a true label $z_i \in
\{1, 2, \ldots, K\}$. $z_i$ is not observed in the data and is modeled as a latent variable. This mimics most
real-world scenarios where the ground truth capturing the optimal decision or the true label is often
not available.
3.2 Defining Confusions over Time (CoT) model
The CoT model jointly addresses the problems of modeling confusions of individual decision makers
and how these confusions change over time, and also provides interpretable diagnostic insights into
the collective decision making process. Each of these aspects is captured by the generative process of
the CoT model, which comprises of the following components: (1) cluster assignments; (2) prototype
selection and subspace feature indicator generation for each of the clusters; (3) true label generation
for each of the items; (4) time dependent confusion matrices. Below, we describe each of these
components and highlight the connections between them.
Cluster Assignments. The CoT model groups decision makers and items into clusters. The model
assumes that there are L1 decision maker clusters and L2 item clusters. The values of L1 , L2 are
assumed to be available in advance. Each decision maker j is assigned to a cluster cj , which is
sampled from a multinomial distribution with a uniform Dirichlet prior $\tau$. Similarly, each item i is
associated with a cluster $d_i$, sampled from a multinomial distribution with a uniform Dirichlet prior
$\tau'$. The features of decision makers and the decisions that they make depend on the clusters they
belong to. Analogously, the true labels of items, their features and the decisions involving them are
influenced by the clusters to which they are affiliated.
Prototype and Subspace Feature Indicator. The interpretability of the CoT stems from the following two crucial components: associating each decision maker and item cluster with a prototype or
an exemplar, and a subspace feature indicator which is a binary vector indicating which features are
important in characterizing the cluster. The prototype $p_c$ of a decision maker cluster c is obtained
by sampling uniformly over all decision makers $1 \ldots |J|$, i.e., $p_c \sim \text{Uniform}(1, |J|)$. The subspace
feature indicator $\phi_c$ of the cluster c is a binary vector of length M. An element of this vector, $\phi_{c,f}$,
corresponds to the feature f and indicates whether that feature is important ($\phi_{c,f} = 1$) in characterizing
the cluster c. $\phi_{c,f} \in \{0, 1\}$ is sampled from a Bernoulli distribution. The prototype $p'_d$ and subspace
feature indicator vector $\phi'_d$ corresponding to an item cluster d are defined analogously.
Generating the features: The prototype and the subspace feature indicator vector together provide a
template for generating feature values of the cluster members. More specifically, if the m-th feature is
designated as an important feature for cluster c, then instances in that cluster are very likely to inherit
the corresponding feature value from the prototype datapoint $p_c$.
We sample the value of a discrete feature m corresponding to decision maker j, $a_m^{(j)}$, from a
multinomial distribution $\theta_{c_j,m}$, where $c_j$ denotes the cluster to which j belongs. $\theta_{c_j,m}$ is in turn
sampled from a Dirichlet distribution parameterized by the vector $g_{p_{c_j},m,\phi_{c_j,m},\lambda}$, i.e., $\theta_{c_j,m} \sim
\text{Dirichlet}(g_{p_{c_j},m,\phi_{c_j,m},\lambda})$. $g_{p_c,m,\phi_{c,m},\lambda}$ is a vector defined such that the e-th element of the vector
corresponds to the prior on the e-th possible value of the m-th feature. The e-th element of this vector
is defined as:
$$g_{p_c,m,\phi_{c,m},\lambda}(e) = \lambda\left(1 + \eta\,\mathbb{1}\left[\phi_{c,m} = 1 \text{ and } p_{c,m} = V_{m,e}\right]\right) \quad (1)$$
where $\mathbb{1}$ denotes the indicator function, and $\lambda$ and $\eta$ are the hyperparameters which determine the
extent to which the prototype will be copied by the cluster members. $V_{m,e}$ denotes the e-th possible
value of the m-th feature. For example, let us assume that the m-th feature corresponds to gender,
which can take one of the following values: {male, female, NA}; then $V_{m,1}$ represents the value
male, $V_{m,2}$ denotes the value female, and so on.
Equation 1 can be explained as follows: if the m-th feature is irrelevant to the cluster c (i.e., $\phi_{c,m} = 0$),
then $\theta_{c,m}$ will be sampled from a Dirichlet distribution with a uniform prior $\lambda$. On the other hand, if
$\phi_{c,m} = 1$, then $\theta_{c,m}$ has a prior of $\lambda(1 + \eta)$ on the feature value which matches the prototype's feature
value, and a prior of just $\lambda$ on all the other possible feature values. The larger the value of $\eta$, the
higher the likelihood that the cluster members assume the same feature value as that of the prototype.
Values of continuous features are sampled in an analogous fashion. We model continuous features as
Gaussian distributions. If a particular continuous feature is designated as an important feature for
some cluster c, then the mean of the Gaussian distribution corresponding to this feature is set to be
equal to that of the corresponding feature value of the prototype $p_c$; otherwise the mean is set to 0.
The variance of the Gaussian distribution is set to $\sigma^2$ for all continuous features.
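A small sketch of the prior vector of Eq. (1) for a single discrete feature (the function name and default hyperparameter values are our own):

```python
import numpy as np

def feature_prior(values, prototype_value, important, lam=1.0, eta=1.0):
    """g_{p_c,m,phi_{c,m},lambda}: a uniform Dirichlet prior lam, boosted
    by a factor (1 + eta) on the value matching the prototype when the
    feature is marked important (phi_{c,m} = 1)."""
    g = np.full(len(values), lam)
    if important:
        g[values.index(prototype_value)] *= 1.0 + eta
    return g

# e.g. feature_prior(['male', 'female', 'NA'], 'female', True)
# -> array([1., 2., 1.])
```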
Though the above exposition focused on clusters of decision makers, we can generate feature values
of items in a similar manner. Feature values of items belonging to some cluster d are sampled from
the corresponding feature distributions $\theta'_d$, which are in turn sampled from priors that account for
the prototype $p'_d$ and subspace feature indicator $\phi'_d$.
True Labels of Items. Our model assumes that every item i is associated with a true label $z_i$. This
true label is sampled from a multinomial distribution $\psi_{d_i}$, where $d_i$ is the cluster to which i belongs.
$\psi_{d_i}$ is sampled from a Dirichlet prior which ensures that the true labels of the members of cluster $d_i$
conform to the true label of the prototype. The prior is defined using a vector $g'_{p'_d}$, and each element
of this vector can be computed as:
$$g'_{p'_d}(e) = \lambda'\left(1 + \eta'\,\mathbb{1}\left[z_{p'_d} = e\right]\right) \quad (2)$$
Note that Equation 2 assigns a higher prior to the label which is the same as that of the cluster's
prototype. The larger the value of $\eta'$, the higher the likelihood that the true labels of all the cluster
members will be the same as that of the prototype.
Time Dependent Confusion Matrices. Each pair of decision maker-item clusters (c, d) is associated
with a set of latent confusion matrices $\pi^{(t)}_{c,d}$, one for each discrete time instant t. These confusion
matrices influence how the decision makers in cluster c judge the items in d, and also allow us to
study how decision maker confusions change with time.

Each confusion matrix is of size $K \times K$, where K denotes the number of possible values that an item
can be labeled with. Each entry (p, q) in a confusion matrix $\pi$ determines the probability that an
item with true label p will be assigned the label q. Higher probability mass on the diagonal signifies
accurate decisions.

Let us consider the confusion matrix corresponding to decision maker cluster c, item cluster d, and
time instant 1 (first time instant): $\pi^{(1)}_{c,d}$. Each row of this matrix, denoted by $\pi^{(1)}_{c,d,z}$ (z is the row index),
is sampled from a Dirichlet distribution with a uniform prior $\kappa$. The CoT framework also models the
dependencies between the confusion matrices at consecutive time instants via a trade-off parameter $\omega$.
The magnitude of $\omega$ determines how similar $\pi^{(t+1)}_{c,d}$ is to $\pi^{(t)}_{c,d}$. The e-th element in the row z of the
confusion matrix $\pi^{(t+1)}_{c,d}$ is sampled as follows:
$$\pi^{(t+1)}_{c,d,z} \sim \text{Dirichlet}\left(h_{\pi^{(t)}_{c,d,z},\,\omega}\right), \quad \text{where} \quad h_{\pi^{(t)}_{c,d,z},\,\omega}(e) = \kappa\left[1 + \omega\,\pi^{(t)}_{c,d,z}(e)\right] \quad (3)$$
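A sketch of forward sampling one chain of confusion matrices per Eq. (3); the hyperparameter values here are illustrative, not the ones used in the paper:

```python
import numpy as np

def sample_confusion_chain(K, T, kappa=1.0, omega=5.0, seed=0):
    """Sample pi^(1), ..., pi^(T) for one (decision-maker cluster,
    item cluster) pair: each row z of pi^(t+1) is drawn from
    Dirichlet(kappa * (1 + omega * pi^(t)_z))."""
    rng = np.random.default_rng(seed)
    chain = [rng.dirichlet(np.full(K, kappa), size=K)]  # pi^(1): uniform prior
    for _ in range(T - 1):
        prev = chain[-1]
        chain.append(np.stack([
            rng.dirichlet(kappa * (1.0 + omega * prev[z]))
            for z in range(K)
        ]))
    return chain  # list of T row-stochastic K x K matrices
```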
[Figure 1 here: plate diagram with blocks for the features of decision makers, the prototypes and subspace feature indicators for decision maker clusters, the features and labels of items, the prototypes and subspace feature indicators for item clusters, the decisions, and the time-dependent confusion matrices.]
Figure 1: Plate notation for the CoT model. Each block is annotated with descriptive text. Some
hyperparameters are omitted to improve readability.
Generating the decisions: Our model assumes that the decision $r_{j,i}$ made by a decision maker j about
an item i depends on the clusters $c_j$, $d_i$ that j and i belong to, the time instant $t_{j,i}$ when the decision
was made, and the true label $z_i$ of item i. More specifically, $r_{j,i} \sim \text{Multinomial}(\pi^{(t_{j,i})}_{c_j,d_i,z_i})$.
Complete Generative Process. Please refer to the Appendix for the complete generative process of
CoT. The graphical representation of CoT is shown in Figure 1.
3.3 Inference
We use a collapsed Gibbs sampling [4] approach to infer the latent variables of the CoT framework.
This technique involves integrating out all the intermediate latent variables $\theta, \psi, \theta', \pi$ and sampling
only the variables corresponding to the prototypes $p_c, p'_d$, the subspace feature indicator vectors $\phi_{c,m}, \phi'_{d,n}$,
the cluster assignments $c_j, d_i$, and the item labels $z_i$. The update equation for $p_c$ is given by:1
$$p(p_c = q \mid z, c, d, \phi, \phi', p', p^{(\neg c)}) \propto \prod_{m=1}^{M} \mathbb{1}(\phi_{c,m} = 1)\, u_{c,m,q_m} \quad (4)$$
where $u_{c,m,q_m}$ denotes the number of instances belonging to cluster c for which the discrete feature
m takes the value $q_m$, and $q_m$ denotes the value of the feature m corresponding to the decision maker
q. The update equation for $p'_d$ can be derived analogously. The conditional distribution for $\phi_{c,m}$,
obtained by integrating out the $\theta$ variables, is:
$$p(\phi_{c,m} = s \mid z, c, d, \phi^{(\neg(c,m))}, \phi', p', p, \rho) \propto
\begin{cases}
\rho \cdot \dfrac{B(g_{p_c,m,1,\lambda} + \bar{u}_c)}{B(g_{p_c,m,1,\lambda})} & \text{if } s = 1 \\[2ex]
(1 - \rho) \cdot \dfrac{B(g_{p_c,m,0,\lambda} + \bar{u}_c)}{B(g_{p_c,m,0,\lambda})} & \text{otherwise}
\end{cases} \quad (5)$$
where $\bar{u}_c$ denotes the number of decision makers belonging to cluster c and B denotes the Beta
function. The conditional distribution for $\phi'_{d,n}$ can be written in an analogous manner. The conditional
distributions for $c_j$, $d_i$ and $z_i$ can be derived as described in [10].
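Each Gibbs step reduces to drawing from an unnormalized discrete conditional such as Eq. (4) or (5). A sketch of that elementary operation (our own helper, not from the paper):

```python
import numpy as np

def sample_categorical(unnormalized, rng):
    """Draw an index proportionally to unnormalized conditional
    probabilities -- the basic step used to resample each latent
    variable (p_c, phi_{c,m}, c_j, d_i, z_i) in turn."""
    p = np.asarray(unnormalized, dtype=float)
    return rng.choice(len(p), p=p / p.sum())
```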
4 Experimental Evaluation
In this section, we present the evaluation of the CoT model on a variety of datasets. Our experiments
are designed to evaluate the performance of our model on a variety of tasks such as recovering
confusion matrices, predicting item labels and decisions made by decision makers. We also study the
interpretability aspects of our model by evaluating the insights obtained.
1 Due to space limitations, we present the update equations assuming that the features of decision makers and items are discrete.
Dataset | # of Evaluators | # of Items | # of Decisions | Evaluator Features | Item Features
Bail | 252 | 250,500 | 250,500 | # of felony, misd., minor offense cases | Previous arrests, offenses, pays rent, children, gender
Asthma | 48 | 60,048 | 62,497 | Gender, age, experience, specialty, # of patients seen | Gender, age, asthma history, BMI, allergies
Insurance | 74 | 49,876 | 50,943 | # of policy decisions; # of construction, chemical, technology decisions | Domain, previous losses, premium amount quoted

Table 1: Summary statistics of our datasets.
Datasets. We evaluate CoT on the following real-world datasets: (1) Bail dataset comprising
of information about criminal court judges deciding if defendants should be released without any
conditions, asked to pay bail amount, or be locked up (K = 3); Here, decision makers are judges and
items are defendants. (2) Asthma dataset which captures the treatment recommendations given by
doctors to patients. Patients are recommended one of the two possible categories of treatments: mild
(mild drugs/inhalers), strong (nebulizers/immunomodulators) (K = 2). (3) Insurance data which
contains information about client managers deciding if a client company?s insurance request should
be approved or denied (K = 2). Each of the datasets spans about three years in time. We do have
the ground-truth of true labels associated with defendants/patients/insurance clients in the form of
expert decisions and observed consequences for each of the datasets. Note, however, that we only use
a small fraction (5%) of the available true labels during the learning process. The decision makers
and items are associated with a variety of features in each of these datasets (Table 1).
Baselines & Ablations. We benchmark the performance of CoT against the following state-of-the-art baselines: Dawid-Skene Model (DS) [2], Single Confusion model (SC) [12], Hybrid Confusion
Model (HC) [12], and Joint Confusion Model (JC) [10]. The DS, SC and HC models focus only on modeling
decisions of individual decision makers and do not provide any diagnostic insights into the decision
making process. The JC model, on the other hand, also provides diagnostic insights (via post-processing).
None of the baselines account for temporal aspects.
To evaluate the importance of the various components of our model, we also consider multiple
ablations of CoT. Non-temporal CoT (NT-CoT) is a variant of CoT which does not incorporate
the temporal component and hence is applicable to a single time instant. Non-interpretable CoT
(NI-CoT) is another ablation which does not involve the prototype or subspace feature indicator vector
generation; instead $\theta$, $\theta'$, and $\pi$ are sampled from symmetric Dirichlet priors.
Experimental Setup. In most real-world settings involving human decision making, the true labels
of items are available for very few instances. We mimic this setting in our experiments by employing
weak supervision: we let all models (including the baselines) access the true labels of about 5%
of the items (selected randomly) in the data during the learning phase. In all of our experiments, we
divide each dataset into three discrete time chunks, each corresponding to a year in the
data. While our model can handle the temporal aspects explicitly, the same is not true for any of
the baselines or the ablation Non-temporal CoT. To work around this, we run each of these
models separately on data from each time slice. We run the collapsed Gibbs sampling inference
until the approximate convergence of the log-likelihood. All the hyperparameters of our model are
initialized to standard values: $\tau = \tau' = \lambda = \lambda' = \eta = \eta' = \kappa = \omega = 1$, $\rho = 0.1$. The numbers of
decision maker and item clusters $L_1$ and $L_2$ were set using the Bayesian Information Criterion (BIC)
metric [14]. The parameters of all the other baselines were chosen similarly.
4.1 Evaluating Estimated Confusion Matrices and Predictive Power
We evaluate CoT on estimating confusion matrices, predicting item labels, and predicting decisions
of decision makers. We first present the details of each task and then discuss the results.
Recovering Confusion Matrices. We experiment with the CoT model to determine how accurately
it can recover decision maker confusion matrices. To measure this, we use the Mean Absolute Error
(MAE) metric to compare the elements of the estimated confusion matrix ($\pi'$) and the observed
confusion matrix ($\pi$). The MAE of two such matrices is the mean of the element-wise absolute differences:
$$MAE(\pi, \pi') = \frac{1}{K^2} \sum_{u=1}^{K} \sum_{v=1}^{K} \left|\pi_{u,v} - \pi'_{u,v}\right|$$

Method | Predicting item labels (Bail / Asthma / Insurance) | Inferring confusion matrices (Bail / Asthma / Insurance) | Predicting decisions (Bail / Asthma / Insurance)
SC | 0.53 / 0.59 / 0.51 | 0.38 / 0.31 / 0.40 | 0.52 / 0.51 / 0.55
DS | 0.61 / 0.63 / 0.64 | 0.32 / 0.28 / 0.36 | 0.56 / 0.58 / 0.58
HC | 0.62 / 0.65 / 0.66 | 0.31 / 0.26 / 0.33 | 0.59 / 0.64 / 0.61
JC | 0.64 / 0.68 / 0.69 | 0.26 / 0.19 / 0.29 | 0.64 / 0.67 / 0.66
LR | 0.56 / 0.60 / 0.57 | — | 0.58 / 0.60 / 0.57
NT-CoT | 0.65 / 0.68 / 0.69 | 0.24 / 0.19 / 0.28 | 0.66 / 0.68 / 0.66
NI-CoT | 0.69 / 0.70 / 0.70 | 0.21 / 0.18 / 0.26 | 0.67 / 0.70 / 0.68
CoT | 0.71 / 0.72 / 0.74 | 0.19 / 0.16 / 0.23 | 0.69 / 0.74 / 0.71
Gain % | 9.86 / 5.56 / 6.76 | 36.84 / 18.75 / 26.09 | 7.25 / 9.46 / 7.04

Table 2: Experimental results: CoT consistently performs best across all tasks and datasets. The bottom
row of the table indicates the percentage gain of CoT over the best performing baseline, JC.
While the baseline models SC, DS, HC associate a single confusion matrix with each decision maker,
the baseline JC and our model assume that each decision maker can have multiple confusion matrices
(one per each item cluster). To ensure a fair comparison, we apply the MAE metric every time a
decision maker judges an item choosing the appropriate confusion matrix and then compute the
average MAE.
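The metric itself is a one-liner. A sketch in NumPy:

```python
import numpy as np

def mae(pi, pi_est):
    """Element-wise mean absolute error between two K x K confusion
    matrices, as defined above."""
    assert pi.shape == pi_est.shape and pi.shape[0] == pi.shape[1]
    return np.abs(pi - pi_est).mean()  # mean over all K^2 entries
```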
Predicting Item Labels. We also evaluate the CoT on the task of predicting item labels. We use
the AUC ROC metric to measure the predictive performance. In addition to the previously discussed
baselines, we also compare the performance against Logistic Regression (LR) classifier. The LR
model was provided decision maker and item features, time stamps, and decision maker decisions as
input features.
Predicting Evaluator Decisions. The CoT model can also be used to predict decision maker
decisions. Recall that the decision maker decisions are regarded as observed variables throughout
the inference. However, we can leverage the values of all the latent variables learned during inference
to predict the decision maker decisions. In order to execute this task, we divide the data into 10 folds
and carry out the Gibbs sampling inference procedure on the first 9 folds where the decision maker
decisions are observed. We then use the estimated latent variables to sample the decision maker
decisions for the remaining fold. We repeat this process over each of the 10 folds and report average
AUC.
Results and Discussion. Results of all the tasks are presented in Table 2. CoT outperforms all the
other baselines and ablations on all the tasks. The SC model which assumes that all the decision
makers share a single confusion matrix performs extremely poorly compared to the other baselines,
indicating that its assumptions are not valid in real-world datasets. The JC model, which groups
similar decision makers and items together turns out to be one of the best performing baseline. The
performance of our ablation models indicates that excluding the temporal aspects of the CoT causes a
dip in the performance of the model on all the tasks. Furthermore, leaving out the interpretability
components affects the model performance slightly. These results demonstrate the utility of the
joint inference of temporal, interpretable aspects alongside decision maker confusions and cluster
assignments.
4.2 Evaluating Interpretability
In this section, we first present an evaluation of the quality of the clusters generated by the CoT
model. We then discuss some of the qualitative insights obtained using CoT.
Model | Purity (Bail / Asthma / Insurance) | Inverse Purity (Bail / Asthma / Insurance)
JC | 0.67 / 0.71 / 0.74 | 0.63 / 0.66 / 0.67
NT-CoT | 0.74 / 0.78 / 0.79 | 0.72 / 0.73 / 0.76
NI-CoT | 0.72 / 0.76 / 0.78 | 0.67 / 0.71 / 0.72
CoT | 0.83 / 0.84 / 0.81 | 0.78 / 0.79 / 0.82

Table 3: Average purity and inverse purity computed across all decision maker and item clusters.
Figure 2: Estimated (top) and observed (bottom) confusion matrices for asthma dataset: These
matrices correspond to the group of decision makers who have relatively little experience (# of years
practising medicine = 0/1) and the group of patients with allergies but no past asthma attacks.
Cluster Quality. The prototype and the subspace feature indicator vector of a cluster allow us to
understand the nature of the instances in the cluster. For instance, if the subspace feature indicator
vector signifies that gender is the one and only important feature for some decision maker cluster c
and if the prototype of that cluster has value gender = female, then we can infer that c consists
of female decision makers. Since we are able to associate each cluster with such patterns, we can
readily define the notions of purity and inverse purity of a cluster.
Consider cluster c from the example above again. Since the defining pattern of this cluster is gender
= female, we can compute the purity of the cluster by calculating what fraction of the decision makers
in the cluster are female. Similarly, we can also compute what fraction of all female decision makers
are assigned to cluster c. This is referred to as inverse purity. While the purity metric captures
the notion of cluster homogeneity, the inverse purity metric ensures cluster completeness.
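Both metrics can be computed directly from set overlaps. A sketch, assuming instances are represented by hashable ids (our own framing):

```python
def purity(cluster, matches):
    """Fraction of the cluster's members that satisfy its defining
    pattern (e.g. gender == female for the example above)."""
    return len(cluster & matches) / len(cluster)

def inverse_purity(cluster, matches):
    """Fraction of all pattern-matching instances assigned to the cluster."""
    return len(cluster & matches) / len(matches)

# e.g. purity({1, 2, 3}, {2, 3, 9}) -> 0.667; inverse_purity(...) -> 0.667
```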
We compute the average purity and inverse purity metrics for the CoT, its ablation and a baseline
JC across all the decision maker and item clusters and the results are presented in Table 3. Notice
that CoT outperforms all the other ablations and the JC baseline. It is interesting to note that the
non-interpretable CoT (NI-CoT) has much lower purity and inverse purity compared to non-temporal
CoT (NT-CoT) as well as the CoT. This is partly due to the fact that the NI-CoT does not model
prototypes or feature indicators which leads to less pure clusters.
Qualitative Inspection of Insights. We now inspect the cluster descriptions and the corresponding
confusion matrices generated by our approach. Figure 2 shows one of the insights obtained by our
model on the asthma dataset. The confusion matrices presented in Figure 2 correspond to the group
of doctors with 0/1 years of experience evaluating patients who have allergies but did not suffer from
previous asthma attacks. The three confusion matrices, one for each year (from left to right), shown
on the top row in Figure 2 correspond to our estimates and those on the bottom row are computed
from the data. It can be seen that the estimated confusion matrices match the ground truth very
closely, thus demonstrating the effectiveness of the CoT framework.
Interpreting the results in Figure 2, we find that doctors within the first year of their practice (leftmost
confusion matrix) were recommending stronger treatments (nebulizers and immunomodulators) to
patients who are likely to get better with milder treatments such as low impact drugs and inhalers.
As time passed, they were able to better identify patients who could get better with milder options.
This is a very interesting insight and we also found that such a pattern holds for client managers with
relatively little experience. This could possibly mean that inexperienced decision makers are more
likely to be risk averse, and therefore opt for safer choices.
Kernel Bayesian Inference with
Posterior Regularization
Yang Song, Jun Zhu*, Yong Ren
Dept. of Physics, Tsinghua University, Beijing, China
Dept. of Comp. Sci. & Tech., TNList Lab; Center for Bio-Inspired Computing Research
State Key Lab for Intell. Tech. & Systems, Tsinghua University, Beijing, China
[email protected]; {dcszj@, renyong15@mails}.tsinghua.edu.cn
Abstract
We propose a vector-valued regression problem whose solution is equivalent to the
reproducing kernel Hilbert space (RKHS) embedding of the Bayesian posterior
distribution. This equivalence provides a new understanding of kernel Bayesian
inference. Moreover, the optimization problem induces a new regularization for the
posterior embedding estimator, which is faster and has comparable performance
to the squared regularization in kernel Bayes' rule. This regularization coincides
with a former thresholding approach used in kernel POMDPs whose consistency
remains to be established. Our theoretical work solves this open problem and
provides consistency analysis in regression settings. Based on our optimizational
formulation, we propose a flexible Bayesian posterior regularization framework
which for the first time enables us to put regularization at the distribution level.
We apply this method to nonparametric state-space filtering tasks with extremely
nonlinear dynamics and show performance gains over all other baselines.
1 Introduction
Kernel methods have long been effective in generalizing linear statistical approaches to nonlinear
cases by embedding a sample to the reproducing kernel Hilbert space (RKHS) [1]. In recent years,
the idea has been generalized to embedding probability distributions [2, 3]. Such embeddings of
probability measures are usually called kernel embeddings (a.k.a. kernel means). Moreover, [4, 5, 6]
show that statistical operations of distributions can be realized in RKHS by manipulating kernel
embeddings via linear operators. This approach has been applied to various statistical inference and
learning problems, including training hidden Markov models (HMM) [7], belief propagation (BP)
in tree graphical models [8], planning Markov decision processes (MDP) [9] and partially observed
Markov decision processes (POMDP) [10].
One of the key workhorses in the above applications is the kernel Bayes' rule [5], which establishes
the relation among the RKHS representations of the priors, likelihood functions and posterior
distributions. Despite empirical success, the characterization of kernel Bayes' rule remains largely
incomplete. For example, it is unclear how the estimators of the posterior distribution embeddings
relate to optimizers of some loss functions, though the vanilla Bayes' rule has a nice connection [11].
This makes generalizing the results especially difficult and hinders the intuitive understanding of
kernel Bayes' rule.
To alleviate this weakness, we propose a vector-valued regression [12] problem whose optimizer
is the posterior distribution embedding. This new formulation is inspired by the progress in two
fields: 1) the alternative characterization of conditional embeddings as regressors [13], and 2) the
* Corresponding author.
introduction of posterior regularized Bayesian inference (RegBayes) [14] based on an optimizational
reformulation of the Bayes' rule.
We demonstrate the novelty of our formulation by providing a new understanding of kernel Bayesian
inference, with theoretical, algorithmic and practical implications. On the theoretical side, we are
able to prove the (weak) consistency of the estimator obtained by solving the vector-valued regression
problem under reasonable assumptions. As a side product, our proof can be applied to a thresholding
technique used in [10], whose consistency is left as an open problem. On the algorithmic side, we
propose a new regularization technique, which is shown to run faster and has comparable accuracy to
squared regularization used in the original kernel Bayes' rule [5]. Similar in spirit to RegBayes, we
are also able to derive an extended version of the embeddings by directly imposing regularization on
the posterior distributions. We call this new framework kRegBayes. Thanks to RKHS embeddings of
distributions, this is the first time, to the best of our knowledge, people can do posterior regularization
without invoking linear functionals (such as moments) of the random variables. On the practical side,
we demonstrate the efficacy of our methods on both simple and complicated synthetic state-space
filtering datasets.
Same to other algorithms based on kernel embeddings, our kernel regularized Bayesian inference
framework is nonparametric and general. The algorithm is nonparametric, because the priors, posterior
distributions and likelihood functions are all characterized by weighted sums of data samples. Hence
it does not need the explicit mechanism such as differential equations of a robot arm in filtering tasks.
It is general in terms of being applicable to a broad variety of domains as long as the kernels can be
defined, such as strings, orthonormal matrices, permutations and graphs.
2 Preliminaries
2.1 Kernel embeddings
Let $(\mathcal{X}, \mathcal{B}_{\mathcal{X}})$ be a measurable space of random variables, $p_X$ be the associated probability measure and $\mathcal{H}_X$ be a RKHS with kernel $k(\cdot, \cdot)$. We define the kernel embedding of $p_X$ to be $\mu_X = \mathbb{E}_{p_X}[\phi(X)] \in \mathcal{H}_X$, where $\phi(X) = k(X, \cdot)$ is the feature map. Such a vector-valued expectation always exists if the kernel is bounded, namely $\sup_x k_X(x, x) < \infty$.

The concept of kernel embeddings has several important statistical merits. Thanks to the reproducing property, the expectation of $f \in \mathcal{H}$ w.r.t. $p_X$ can be easily computed as $\mathbb{E}_{p_X}[f(X)] = \mathbb{E}_{p_X}[\langle f, \phi(X)\rangle] = \langle f, \mu_X\rangle$. There exist universal kernels [15] whose corresponding RKHS $\mathcal{H}$ is dense in $C_{\mathcal{X}}$ in terms of the sup norm. This means $\mathcal{H}$ contains a rich range of functions $f$ whose expectations can be computed by inner products without invoking usually intractable integrals. In addition, the inner product structure of the embedding space $\mathcal{H}$ provides a natural way to measure the differences of distributions through norms.

In much the same way we can define kernel embeddings of linear operators. Let $(\mathcal{X}, \mathcal{B}_{\mathcal{X}})$ and $(\mathcal{Y}, \mathcal{B}_{\mathcal{Y}})$ be two measurable spaces, $\phi(x)$ and $\psi(y)$ be the measurable feature maps of the corresponding RKHS $\mathcal{H}_X$ and $\mathcal{H}_Y$ with bounded kernels, and $p$ denote the joint distribution of a random variable $(X, Y)$ on $\mathcal{X} \times \mathcal{Y}$ with product measures. The covariance operator $C_{XY}$ is defined as $C_{XY} = \mathbb{E}_p[\phi(X) \otimes \psi(Y)]$, where $\otimes$ denotes the tensor product. Note that it is possible to identify $C_{XY}$ with $\mu_{(XY)}$ in $\mathcal{H}_X \otimes \mathcal{H}_Y$ with the kernel function $k((x_1, y_1), (x_2, y_2)) = k_X(x_1, x_2)\, k_Y(y_1, y_2)$ [16]. There is an important relation between kernel embeddings of distributions and covariance operators, which is fundamental for the sequel:

Theorem 1 ([4, 5]). Let $\mu_X$, $\mu_Y$ be the kernel embeddings of $p_X$ and $p_Y$ respectively. If $C_{XX}$ is injective, $\mu_X \in \mathcal{R}(C_{XX})$ and $\mathbb{E}[g(Y) \mid X = \cdot] \in \mathcal{H}_X$ for all $g \in \mathcal{H}_Y$, then
$$\mu_Y = C_{YX} C_{XX}^{-1} \mu_X. \qquad (1)$$
In addition, $\mu_{Y|X=x} = \mathbb{E}[\psi(Y) \mid X = x] = C_{YX} C_{XX}^{-1} \phi(x)$.

On the implementation side, we need to estimate these kernel embeddings via samples. An intuitive estimator for the embedding $\mu_X$ is $\hat{\mu}_X = \frac{1}{N}\sum_{i=1}^N \phi(x_i)$, where $\{x_i\}_{i=1}^N$ is a sample from $p_X$. Similarly, the covariance operators can be estimated by $\hat{C}_{XY} = \frac{1}{N}\sum_{i=1}^N \phi(x_i) \otimes \psi(y_i)$. Both estimators are shown to converge in the RKHS norm at a rate of $O_p(N^{-1/2})$ [4].
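For intuition, these estimators reduce to Gram-matrix computations. The following NumPy sketch is our own illustration (the Gaussian kernel and all variable names are assumptions made for concreteness): it evaluates the empirical embedding $\hat{\mu}_X(\cdot) = \frac{1}{N}\sum_i k(x_i, \cdot)$ at query points and estimates $\mathbb{E}[f(X)]$ for an RKHS function $f$.

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    """Gram matrix k(a_i, b_j) for row-stacked samples a (N, d), b (M, d)."""
    sq = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))              # sample {x_i} from p_X

def mu_hat(query):
    """Evaluate the empirical embedding mu_hat_X at query points (M, d)."""
    return gaussian_kernel(X, query).mean(axis=0)

# For f = sum_j w_j k(z_j, .), the estimate of E[f(X)] = <f, mu_hat_X>
# is a weighted kernel sum over the sample:
Z = rng.normal(size=(3, 2))
w = np.array([0.5, -1.0, 2.0])
expectation_f = w @ gaussian_kernel(Z, X).mean(axis=1)
```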
2.2 Kernel Bayes' rule
Let $\pi(Y)$ be the prior distribution of a random variable $Y$, $p(X = x \mid Y)$ be the likelihood, $p_\pi(Y \mid X = x)$ be the posterior distribution given $\pi(Y)$ and observation $x$, and $p_\pi(X, Y)$ be the joint distribution incorporating $\pi(Y)$ and $p(X \mid Y)$. Kernel Bayesian inference aims to obtain the posterior embedding $\mu_Y^\pi(X = x)$ given a prior embedding $\mu_Y$ and a covariance operator $C_{XY}$. By Bayes' rule, $p_\pi(Y \mid X = x) \propto \pi(Y)\, p(X = x \mid Y)$. We assume that there exists a joint distribution $p$ on $\mathcal{X} \times \mathcal{Y}$ whose conditional distribution matches $p(X \mid Y)$ and let $C_{XY}$ be its covariance operator. Note that we do not require $p = p_\pi$, hence $p$ can be any convenient distribution.

According to Thm. 1, $\mu_Y^\pi(X = x) = C_{YX}^\pi (C_{XX}^\pi)^{-1} \phi(x)$, where $C_{YX}^\pi$ corresponds to the joint distribution $p_\pi$ and $C_{XX}^\pi$ to the marginal probability of $p_\pi$ on $X$. Recall that $C_{YX}^\pi$ can be identified with $\mu_{(YX)}$ in $\mathcal{H}_Y \otimes \mathcal{H}_X$; we can apply Thm. 1 to obtain $\mu_{(YX)} = C_{(YX)Y} C_{YY}^{-1} \mu_Y$, where $C_{(YX)Y} := \mathbb{E}[\psi(Y) \otimes \phi(X) \otimes \psi(Y)]$. Similarly, $C_{XX}^\pi$ can be represented as $\mu_{(XX)} = C_{(XX)Y} C_{YY}^{-1} \mu_Y$. This way of computing posterior embeddings is called the kernel Bayes' rule [5].

Given estimators of the prior embedding $\hat{\mu}_Y = \sum_{i=1}^m \tilde{\alpha}_i \psi(\tilde{y}_i)$ and the covariance operator $\hat{C}_{YX}$, the posterior embedding can be obtained via $\hat{\mu}_Y^\pi(X = x) = \hat{C}_{YX}^\pi \big( [\hat{C}_{XX}^\pi]^2 + \delta I \big)^{-1} \hat{C}_{XX}^\pi \phi(x)$, where squared regularization is added to the inversion. Note that the regularization for $\hat{\mu}_Y^\pi(X = x)$ is not unique. A thresholding alternative is proposed in [10] without establishing its consistency. We will discuss this thresholding regularization from a different perspective and give consistency results in the sequel.
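In Gram-matrix form, the two stages above reduce to a pair of regularized linear solves. The sketch below is our own paraphrase of the squared-regularization estimator from [5] (variable names are ours, and the exact regularization schedule is an assumption); it returns weights $w$ such that $\hat{\mu}_Y^\pi(X = x) = \sum_i w_i \psi(y_i)$.

```python
import numpy as np

def kernel_bayes_rule(G_Y, G_Yt, alpha_t, G_X, k_x, lam, delta):
    """Posterior weights for kernel Bayes' rule (squared regularization).

    G_Y     (n, n): k_Y(y_i, y_j) on the likelihood sample
    G_Yt    (n, l): k_Y(y_i, y~_j) against the prior sample
    alpha_t (l,)  : prior weights, mu_hat_Y = sum_j alpha_t[j] psi(y~_j)
    G_X     (n, n): k_X(x_i, x_j);  k_x (n,): k_X(x_i, x) at observation x
    """
    n = G_Y.shape[0]
    # Stage 1: weights m representing the prior-reweighted joint embedding.
    m = np.linalg.solve(G_Y + n * lam * np.eye(n), G_Yt @ alpha_t)
    D = np.diag(m)
    # Stage 2: squared-regularized "division" by the marginal operator.
    DG = D @ G_X
    w = DG @ np.linalg.solve(DG @ DG + delta * n * np.eye(n), D @ k_x)
    return w  # posterior embedding: sum_i w[i] * psi(y_i)
```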
2.3 Regularized Bayesian inference
Regularized Bayesian inference (RegBayes [14]) is based on a variational formulation of the Bayes' rule [11]. The posterior distribution can be viewed as the solution of
$$\min_{p(Y|X=x)} \mathrm{KL}\big(p(Y|X=x)\,\|\,\pi(Y)\big) - \int \log p(X=x|Y)\, \mathrm{d}p(Y|X=x),$$
subject to $p(Y|X=x) \in \mathcal{P}_{\mathrm{prob}}$, where $\mathcal{P}_{\mathrm{prob}}$ is the set of valid probability measures. RegBayes combines this formulation and posterior regularization [17] in the following way:
$$\min_{p(Y|X=x),\,\xi} \mathrm{KL}\big(p(Y|X=x)\,\|\,\pi(Y)\big) - \int \log p(X=x|Y)\, \mathrm{d}p(Y|X=x) + U(\xi)$$
$$\text{s.t.}\quad p(Y|X=x) \in \mathcal{P}_{\mathrm{prob}}(\xi),$$
where $\mathcal{P}_{\mathrm{prob}}(\xi)$ is a subset depending on $\xi$ and $U(\xi)$ is a loss function. Such a formulation makes it possible to regularize Bayesian posterior distributions, smoothing the gap between Bayesian generative models and discriminative models. Related applications include max-margin topic models [18] and infinite latent SVMs [14].

Despite the flexibility of RegBayes, regularization on the posterior distributions is practically imposed indirectly via expectations of a function. We shall see soon in the sequel that our new framework of kernel regularized Bayesian inference can control the posterior distribution in a direct way.
2.4 Vector-valued regression
The main task for vector-valued regression [12] is to minimize the following objective
$$\mathcal{E}(f) := \sum_{j=1}^{n} \|y_j - f(x_j)\|_{\mathcal{H}_Y}^2 + \lambda \|f\|_{\mathcal{H}_K}^2,$$
where $y_j \in \mathcal{H}_Y$ and $f : \mathcal{X} \to \mathcal{H}_Y$. Note that $f$ is a function with RKHS values and we assume that $f$ belongs to a vector-valued RKHS $\mathcal{H}_K$. In a vector-valued RKHS, the kernel function $k$ is generalized to linear operators $\mathcal{L}(\mathcal{H}_Y) \ni K(x_1, x_2) : \mathcal{H}_Y \to \mathcal{H}_Y$, such that $K(x_1, x_2) y := (K_{x_2} y)(x_1)$ for every $x_1, x_2 \in \mathcal{X}$ and $y \in \mathcal{H}_Y$, where $K_{x_2} y \in \mathcal{H}_K$. The reproducing property is generalized to $\langle y, f(x)\rangle_{\mathcal{H}_Y} = \langle K_x y, f\rangle_{\mathcal{H}_K}$ for every $y \in \mathcal{H}_Y$, $f \in \mathcal{H}_K$ and $x \in \mathcal{X}$. In addition, [12] shows that the representer theorem still holds for vector-valued RKHS.
3 Kernel Bayesian inference as a regression problem

One of the unique merits of the posterior embedding $\mu_Y^\pi(X = x)$ is that expectations w.r.t. posterior distributions can be computed via inner products, i.e., $\langle h, \mu_Y^\pi(X = x)\rangle = \mathbb{E}_{p_\pi(Y|X=x)}[h(Y)]$ for all $h \in \mathcal{H}_Y$. Since $\mu_Y^\pi(X = x) \in \mathcal{H}_Y$, $\mu_Y^\pi$ can be viewed as an element of a vector-valued RKHS $\mathcal{H}_K$ containing functions $f : \mathcal{X} \to \mathcal{H}_Y$.

A natural optimization objective [13] thus follows from the above observations:
$$\mathcal{E}[\mu] := \sup_{\|h\|_{\mathcal{H}_Y} \le 1} \mathbb{E}_X \big( \mathbb{E}_Y[h(Y) \mid X] - \langle h, \mu(X)\rangle_{\mathcal{H}_Y} \big)^2, \qquad (2)$$
where $\mathbb{E}_X[\cdot]$ denotes the expectation w.r.t. $p_\pi(X)$ and $\mathbb{E}_Y[\cdot \mid X]$ denotes the expectation w.r.t. the Bayesian posterior distribution, i.e., $p_\pi(Y \mid X) \propto \pi(Y) p(X \mid Y)$. Clearly, $\mu_Y^\pi = \arg\inf_\mu \mathcal{E}[\mu]$. Following [13], we introduce an upper bound $\mathcal{E}_s$ of $\mathcal{E}$ by applying Jensen's and Cauchy-Schwarz's inequalities consecutively:
$$\mathcal{E}_s[\mu] := \mathbb{E}_{(X,Y)}\big[\|\psi(Y) - \mu(X)\|_{\mathcal{H}_Y}^2\big], \qquad (3)$$
where $(X, Y)$ is the random variable on $\mathcal{X} \times \mathcal{Y}$ with the joint distribution $p_\pi(X, Y) = \pi(Y) p(X \mid Y)$. The first step to make this optimizational framework practical is to find finite sample estimators of $\mathcal{E}_s[\mu]$. We show how to do this in the following section.
3.1 A consistent estimator of $\mathcal{E}_s[\mu]$
Unlike the conditional embeddings in [13], we do not have i.i.d. samples from the joint distribution $p_\pi(X, Y)$, as the priors and likelihood functions are represented with samples from different distributions. We eliminate this problem using a kernel trick, which is one of our main innovations in this paper.

The idea is to use the inner product property of a kernel embedding $\mu_{(X,Y)}$ to represent the expectation $\mathbb{E}_{(X,Y)}[\|\psi(Y) - \mu(X)\|_{\mathcal{H}_Y}^2]$ and then use finite sample estimators of $\mu_{(X,Y)}$ to estimate $\mathcal{E}_s[\mu]$. Recall that we can identify $C_{XY} := \mathbb{E}_{XY}[\phi(X) \otimes \psi(Y)]$ with $\mu_{(X,Y)}$ in a product space $\mathcal{H}_X \otimes \mathcal{H}_Y$ with a product kernel $k_X k_Y$ on $\mathcal{X} \times \mathcal{Y}$ [16]. Let $f_\mu(x, y) = \|\psi(y) - \mu(x)\|_{\mathcal{H}_Y}^2$ and assume that $f_\mu \in \mathcal{H}_X \otimes \mathcal{H}_Y$. The optimization objective $\mathcal{E}_s[\mu]$ can be written as
$$\mathcal{E}_s[\mu] = \mathbb{E}_{(X,Y)}\big[\|\psi(Y) - \mu(X)\|_{\mathcal{H}_Y}^2\big] = \langle f_\mu, \mu_{(X,Y)}\rangle_{\mathcal{H}_X \otimes \mathcal{H}_Y}. \qquad (4)$$
From Thm. 1, we assert that $\mu_{(X,Y)} = C_{(X,Y)Y} C_{YY}^{-1} \mu_Y$, and a natural estimator follows as $\hat{\mu}_{(X,Y)} = \hat{C}_{(X,Y)Y} (\hat{C}_{YY} + \lambda I)^{-1} \hat{\mu}_Y$. As a result, $\hat{\mathcal{E}}_s[\mu] := \langle \hat{\mu}_{(X,Y)}, f_\mu \rangle_{\mathcal{H}_X \otimes \mathcal{H}_Y}$, and we introduce the following proposition to write $\hat{\mathcal{E}}_s$ in terms of Gram matrices.

Proposition 1 (Proof in Appendix). Suppose $(X, Y)$ is a random variable in $\mathcal{X} \times \mathcal{Y}$, where the prior for $Y$ is $\pi(Y)$ and the likelihood is $p(X \mid Y)$. Let $\mathcal{H}_X$ be a RKHS with kernel $k_X$ and feature map $\phi(x)$, $\mathcal{H}_Y$ be a RKHS with kernel $k_Y$ and feature map $\psi(y)$, $\xi(x, y)$ be the feature map of $\mathcal{H}_X \otimes \mathcal{H}_Y$, $\hat{\mu}_Y = \sum_{i=1}^l \tilde{\alpha}_i \psi(\tilde{y}_i)$ be a consistent estimator of $\mu_Y$ and $\{(x_i, y_i)\}_{i=1}^n$ be a sample representing $p(X \mid Y)$. Under the assumption that $f_\mu(x, y) = \|\psi(y) - \mu(x)\|_{\mathcal{H}_Y}^2 \in \mathcal{H}_X \otimes \mathcal{H}_Y$, we have
$$\hat{\mathcal{E}}_s[\mu] = \sum_{i=1}^n \beta_i \|\psi(y_i) - \mu(x_i)\|_{\mathcal{H}_Y}^2, \qquad (5)$$
where $\beta = (\beta_1, \cdots, \beta_n)^\top$ is given by $\beta = (G_Y + n\lambda I)^{-1} \tilde{G}_Y \tilde{\alpha}$, with $(G_Y)_{ij} = k_Y(y_i, y_j)$, $(\tilde{G}_Y)_{ij} = k_Y(y_i, \tilde{y}_j)$, and $\tilde{\alpha} = (\tilde{\alpha}_1, \cdots, \tilde{\alpha}_l)^\top$.
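Proposition 1 makes $\hat{\mathcal{E}}_s[\mu]$ computable from Gram matrices alone. A minimal sketch of the weight computation (ours, not the authors' code):

```python
import numpy as np

def es_weights(G_Y, G_Yt, alpha_t, lam):
    """beta = (G_Y + n*lam*I)^{-1} G~_Y alpha~ from Proposition 1."""
    n = G_Y.shape[0]
    return np.linalg.solve(G_Y + n * lam * np.eye(n), G_Yt @ alpha_t)

# E_s_hat[mu] then equals sum_i beta[i] * ||psi(y_i) - mu(x_i)||^2, where
# the squared RKHS norm expands into pure kernel evaluations:
# k_Y(y_i, y_i) - 2 <psi(y_i), mu(x_i)> + ||mu(x_i)||^2.
```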
The consistency of $\hat{\mathcal{E}}_s[\mu]$ is a direct consequence of the following theorem adapted from [5], since the Cauchy-Schwarz inequality ensures $|\langle \mu_{(X,Y)}, f\rangle - \langle \hat{\mu}_{(X,Y)}, f\rangle| \le \|\mu_{(X,Y)} - \hat{\mu}_{(X,Y)}\| \, \|f\|$.

Theorem 2 (Adapted from [5], Theorem 8). Assume that $C_{YY}$ is injective, $\hat{\mu}_Y$ is a consistent estimator of $\mu_Y$ in the $\mathcal{H}_Y$ norm, and that $\mathbb{E}[k((X, Y), (\tilde{X}, \tilde{Y})) \mid Y = y, \tilde{Y} = \tilde{y}]$ is included in $\mathcal{H}_Y \otimes \mathcal{H}_Y$ as a function of $(y, \tilde{y})$, where $(\tilde{X}, \tilde{Y})$ is an independent copy of $(X, Y)$. Then, if the regularization coefficient $\lambda_n$ decays to 0 sufficiently slowly,
$$\Big\| \hat{C}_{(X,Y)Y} (\hat{C}_{YY} + \lambda_n I)^{-1} \hat{\mu}_Y - \mu_{(X,Y)} \Big\|_{\mathcal{H}_X \otimes \mathcal{H}_Y} \to 0 \qquad (6)$$
in probability as $n \to \infty$.

Although $\hat{\mathcal{E}}_s[\mu]$ is a consistent estimator of $\mathcal{E}_s[\mu]$, it does not necessarily have minima, since the coefficients $\beta_i$ can be negative. One of our main contributions in this paper is the discovery that we can ignore data points $(x_i, y_i)$ with a negative $\beta_i$, i.e., replace $\beta_i$ with $\beta_i^+ := \max(0, \beta_i)$ in $\hat{\mathcal{E}}_s[\mu]$. We give explanations and theoretical justifications in the next section.
3.2 The thresholding regularization
We show in the following theorem that $\hat{\mathcal{E}}_s^+[\mu] := \sum_{i=1}^n \beta_i^+ \|\psi(y_i) - \mu(x_i)\|^2$ converges to $\mathcal{E}_s[\mu]$ in probability in discrete situations. The trick of replacing $\beta_i$ with $\beta_i^+$ is named thresholding regularization.

Theorem 3 (Proof in Appendix). Assume that $\mathcal{X}$ is compact and $|\mathcal{Y}| < \infty$, $k$ is a strictly positive definite continuous kernel with $\sup_{(x,y)} k((x, y), (x, y)) < \infty$ and $f_\mu(x, y) = \|\psi(y) - \mu(x)\|_{\mathcal{H}_Y}^2 \in \mathcal{H}_X \otimes \mathcal{H}_Y$. With the conditions in Thm. 2, we assert that $\hat{\mu}_{(X,Y)}^+$ is a consistent estimator of $\mu_{(X,Y)}$ and $\hat{\mathcal{E}}_s^+[\mu] - \mathcal{E}_s[\mu] \to 0$ in probability as $n \to \infty$.

In the context of partially observed Markov decision processes (POMDPs) [10], a similar thresholding approach, combined with normalization, was proposed to make the Bellman operator isotonic and contractive. However, the authors left the consistency of that approach as an open problem. The justification of normalization has been provided in [13], Lemma 2.2, under the finite space assumption. A slight modification of our proof of Thm. 3 (change the probability space from $\mathcal{X} \times \mathcal{Y}$ to $\mathcal{X}$) can complete the other half as a side product, under the same assumptions.

Compared to the original squared regularization used in [5], thresholding regularization is more computationally efficient because 1) it does not need to multiply the Gram matrix twice, and 2) it does not need to take into consideration those data points with negative $\beta_i$'s. In many cases a large portion of $\{\beta_i\}_{i=1}^n$ is negative but the sum of their absolute values is small. The finite space assumption in Thm. 3 may also be weakened, but this requires deeper theoretical analyses.
3.3 Minimizing $\hat{\mathcal{E}}_s^+[\mu]$
Following the standard steps of solving a RKHS regression problem, we add a Tikhonov regularization term to $\hat{\mathcal{E}}_s^+[\mu]$ to obtain a well-posed problem,
$$\hat{\mathcal{E}}_{\lambda,n}[\mu] = \sum_{i=1}^n \beta_i^+ \|\psi(y_i) - \mu(x_i)\|_{\mathcal{H}_Y}^2 + \lambda \|\mu\|_{\mathcal{H}_K}^2. \qquad (7)$$
Let $\hat{\mu}_{\lambda,n} = \arg\min_\mu \hat{\mathcal{E}}_{\lambda,n}[\mu]$. Note that $\hat{\mathcal{E}}_{\lambda,n}[\mu]$ is a vector-valued regression problem, so the representer theorems in vector-valued RKHS apply here. We summarize the matrix expression of $\hat{\mu}_{\lambda,n}$ in the following proposition.

Proposition 2 (Proof in Appendix). Without loss of generality, we assume that $\beta_i^+ \ne 0$ for all $1 \le i \le n$. Let $\mu \in \mathcal{H}_K$ and choose the kernel of $\mathcal{H}_K$ to be $K(x_i, x_j) = k_X(x_i, x_j) I$, where $I : \mathcal{H}_Y \to \mathcal{H}_Y$ is the identity map. Then
$$\hat{\mu}_{\lambda,n}(x) = \Psi (K_X + \lambda_n \Lambda^+)^{-1} K_{:x}, \qquad (8)$$
where $\Psi = (\psi(y_1), \cdots, \psi(y_n))$, $(K_X)_{ij} = k_X(x_i, x_j)$, $\Lambda^+ = \mathrm{diag}(1/\beta_1^+, \cdots, 1/\beta_n^+)$, $K_{:x} = (k_X(x, x_1), \cdots, k_X(x, x_n))^\top$ and $\lambda_n$ is a positive regularization constant.
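Combining Sections 3.2 and 3.3, the estimator (8) amounts to one regularized linear solve after thresholding. A sketch (ours; it returns the coefficient vector over $\{\psi(y_i)\}$ rather than an RKHS element):

```python
import numpy as np

def posterior_weights(K_X, k_x, beta, lam_n):
    """Coefficients c with mu_hat(x) = sum_i c[i] psi(y_i), cf. Eq. (8)."""
    keep = beta > 0                      # thresholding: drop beta_i <= 0
    K = K_X[np.ix_(keep, keep)]
    Lambda_plus = np.diag(1.0 / beta[keep])
    c = np.linalg.solve(K + lam_n * Lambda_plus, k_x[keep])
    full = np.zeros_like(beta)
    full[keep] = c
    return full
```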
3.4 Theoretical justifications for $\hat{\mu}_{\lambda,n}$
In this section, we provide theoretical explanations for using $\hat{\mu}_{\lambda,n}$ as an estimator of the posterior embedding under specific assumptions. Let $\mu_* = \arg\min_\mu \mathcal{E}[\mu]$, $\mu_0 = \arg\min_\mu \mathcal{E}_s[\mu]$, and recall that $\hat{\mu}_{\lambda,n} = \arg\min_\mu \hat{\mathcal{E}}_{\lambda,n}[\mu]$. We first show the relation between $\mu_*$ and $\mu_0$ and then discuss the relation between $\hat{\mu}_{\lambda,n}$ and $\mu_0$.

The forms of $\mathcal{E}$ and $\mathcal{E}_s$ are exactly the same for posterior kernel embeddings and conditional kernel embeddings. As a consequence, the following theorem from [13] still holds.

Theorem 4 ([13]). If there exists a $\mu_* \in \mathcal{H}_K$ such that for any $h \in \mathcal{H}_Y$, $\mathbb{E}[h \mid X] = \langle h, \mu_*(X)\rangle_{\mathcal{H}_Y}$ $p_X$-a.s., then $\mu_*$ is the $p_X$-a.s. unique minimiser of both objectives:
$$\mu_* = \arg\min_{\mu \in \mathcal{H}_K} \mathcal{E}[\mu] = \arg\min_{\mu \in \mathcal{H}_K} \mathcal{E}_s[\mu].$$
This theorem shows that if the vector-valued RKHS $\mathcal{H}_K$ is rich enough to contain $\mu_{Y|X=x}^\pi$, both $\mathcal{E}$ and $\mathcal{E}_s$ can lead us to the correct embedding. In this case, it is reasonable to use $\mu_0$ instead of $\mu_*$. For the situation where $\mu_{Y|X=x}^\pi \notin \mathcal{H}_K$, we refer the readers to [13].

Unfortunately, we cannot obtain the relation between $\hat{\mu}_{\lambda,n}$ and $\mu_0$ by referring to [19], as in [13]. The main difficulty here is that $\{(x_i, y_i)\}_{i=1}^n$ is not an i.i.d. sample from $p_\pi(X, Y) = \pi(Y) p(X \mid Y)$ and the estimator $\hat{\mathcal{E}}_s^+[\mu]$ does not use i.i.d. samples to estimate expectations. Therefore the concentration inequality ([19], Prop. 2) used in the proofs of [19] cannot be applied.

To solve the problem, we propose Thm. 9 (in Appendix), which leads to a consistency proof for $\hat{\mu}_{\lambda,n}$. The relation between $\hat{\mu}_{\lambda,n}$ and $\mu_0$ can now be summarized in the following theorem.

Theorem 5 (Proof in Appendix). Assume Hypothesis 1 and Hypothesis 2 in [20] and our Assumption 1 (in the Appendix) hold. With the conditions in Thm. 3, we assert that if $\lambda_n$ decreases to 0 sufficiently slowly,
$$\mathcal{E}_s[\hat{\mu}_{\lambda_n,n}] - \mathcal{E}_s[\mu_0] \to 0 \qquad (9)$$
in probability as $n \to \infty$.
4 Kernel Bayesian inference with posterior regularization
Based on our optimizational formulation of kernel Bayesian inference, we can add additional regularization terms to control the posterior embeddings. This technique gives us the possibility to incorporate rich side information from domain knowledge and to enforce supervision on Bayesian inference. We call our framework of imposing posterior regularization kRegBayes.

As an example of the framework, we study the following optimization problem
$$\mathcal{L} := \underbrace{\sum_{i=1}^m \beta_i^+ \|\mu(x_i) - \psi(y_i)\|_{\mathcal{H}_Y}^2 + \lambda \|\mu\|_{\mathcal{H}_K}^2}_{\hat{\mathcal{E}}_{\lambda,n}[\mu]} + \underbrace{\gamma \sum_{i=m+1}^n \|\mu(x_i) - \psi(t_i)\|_{\mathcal{H}_Y}^2}_{\text{the regularization term}}, \qquad (10)$$
where $\{(x_i, y_i)\}_{i=1}^m$ is the sample used for representing the likelihood, $\{(x_i, t_i)\}_{i=m+1}^n$ is the sample used for posterior regularization, and $\lambda, \gamma$ are the regularization constants. Note that in RKHS embeddings, $\psi(t)$ is identified as a point distribution at $t$ [2]. Hence the regularization term in (10) encourages the posterior distributions $p(Y \mid X = x_i)$ to be concentrated at $t_i$. More complicated regularization terms are also possible, such as $\|\mu(x_i) - \sum_{j=1}^l \gamma_j \psi(t_j)\|$.

Compared to vanilla RegBayes, our kernel counterpart has several obvious advantages. First, the difference between two distributions can be naturally measured by RKHS norms. This makes it possible to regularize the posterior distribution as a whole, rather than through expectations of discriminant functions. Second, the framework of kernel Bayesian inference is totally nonparametric, where the priors and likelihood functions are all represented by respective samples. We further demonstrate the properties of kRegBayes through experiments in the next section.

Let $\hat{\mu}_{\mathrm{reg}} = \arg\min_\mu \mathcal{L}$. Since solving $\mathcal{L}$ is substantially the same as minimizing $\hat{\mathcal{E}}_{\lambda,n}[\mu]$, we summarize the solution in the following proposition.
Proposition 3. With the conditions in Prop. 2, we have
$$\hat{\mu}_{\mathrm{reg}}(x) = \Psi (K_X + \lambda \tilde{\Lambda}^+)^{-1} K_{:x}, \qquad (11)$$
where $\Psi = (\psi(y_1), \cdots, \psi(y_n))$, $(K_X)_{ij} = k_X(x_i, x_j)$ for $1 \le i, j \le n$, $\tilde{\Lambda}^+ = \mathrm{diag}(1/\beta_1^+, \cdots, 1/\beta_m^+, 1/\gamma, \cdots, 1/\gamma)$, and $K_{:x} = (k_X(x, x_1), \cdots, k_X(x, x_n))^\top$.
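Proposition 3 shows that supervision only changes the diagonal regularizer relative to Eq. (8). A sketch following Eq. (11) (ours; indexing conventions are assumptions):

```python
import numpy as np

def kregbayes_weights(K_X, k_x, beta_plus, gamma, lam):
    """Coefficients for Eq. (11): the first m (likelihood) points carry
    1/beta_i^+ on the diagonal, the supervision points carry 1/gamma."""
    m, n = len(beta_plus), K_X.shape[0]
    diag = np.concatenate([1.0 / beta_plus, np.full(n - m, 1.0 / gamma)])
    return np.linalg.solve(K_X + lam * np.diag(diag), k_x)
```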
5 Experiments
In this section, we compare the results of kRegBayes and several other baselines for two state-space
filtering tasks. The mechanism behind kernel filtering is stated in [5] and we provide a detailed
introduction in the Appendix, including all the formulas used in the implementation.
Toy dynamics. This experiment is a twist of that used in [5]. We report the results of the extended Kalman filter (EKF) [21], the unscented Kalman filter (UKF) [22], kernel Bayes' rule (KBR) [5], kernel Bayesian learning with thresholding regularization (pKBR) and kRegBayes.

The data points $\{(\theta_t, x_t, y_t)\}$ are generated from the dynamics
$$\theta_{t+1} = \theta_t + 0.4 + \xi_t \pmod{2\pi}, \qquad \begin{pmatrix} x_{t+1} \\ y_{t+1} \end{pmatrix} = \big(1 + \sin(8\theta_{t+1})\big) \begin{pmatrix} \cos\theta_{t+1} \\ \sin\theta_{t+1} \end{pmatrix} + \zeta_t, \qquad (12)$$
where $\theta_t$ is the hidden state, $(x_t, y_t)$ is the observation, $\xi_t \sim \mathcal{N}(0, 0.04)$ and $\zeta_t \sim \mathcal{N}(0, 0.04)$. Note that these dynamics are nonlinear in both the transition and observation functions. The observation model is an oscillation around the unit circle. There are 1000 training data and 200 validation/test data for each algorithm.
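For reference, the dynamics (12) can be simulated in a few lines; the sketch below is ours (note that a variance of 0.04 corresponds to a standard deviation of 0.2):

```python
import numpy as np

def simulate_toy(T, seed=0):
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi)
    states, obs = [], []
    for _ in range(T):
        theta = (theta + 0.4 + rng.normal(0.0, 0.2)) % (2.0 * np.pi)
        r = 1.0 + np.sin(8.0 * theta)
        xy = r * np.array([np.cos(theta), np.sin(theta)])
        obs.append(xy + rng.normal(0.0, 0.2, size=2))
        states.append(theta)
    return np.array(states), np.array(obs)

states, observations = simulate_toy(1000)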
We suppose that EKF, UKF and kRegBayes know the true dynamics of the model and the first hidden state $\theta_1$. In this case, we use $\tilde{\theta}_{t+1} = \theta_1 + 0.4t \pmod{2\pi}$ and $(\tilde{x}_{t+1}, \tilde{y}_{t+1})^\top = (1 + \sin(8\tilde{\theta}_{t+1}))(\cos\tilde{\theta}_{t+1}, \sin\tilde{\theta}_{t+1})^\top$ as the supervision data point for the $(t+1)$-th step. We follow [5] to set our parameters.

The results are summarized in Fig. 1. pKBR has lower errors than KBR, which means the thresholding regularization is practically no worse than the original squared regularization. The lower MSE of kRegBayes compared with pKBR shows that the posterior regularization successfully incorporates information from the equations of the dynamics. Moreover, pKBR and kRegBayes run faster than KBR. The total running times for 50 random datasets of pKBR, kRegBayes and KBR are respectively 601.3s, 677.5s and 3667.4s.

Figure 1: Mean running MSEs against time steps for each algorithm. (Best viewed in color)
Camera position recovery. In this experiment, we build a scene containing a table and a chair, which is derived from classchair.pov (http://www.oyonale.com). With a fixed focal point, the position of the camera uniquely determines the view of the scene. The task of this experiment is to estimate the position of the camera given the image. This is a problem with practical applications in remote sensing and robotics.

We vary the position of the camera in a plane with a fixed height. The transition equations of the hidden states are
$$\theta_{t+1} = \theta_t + 0.2 + \xi_\theta, \quad r_{t+1} = \min\big(R_2, \max(R_1, r_t + \xi_r)\big), \quad x_{t+1} = r_{t+1}\cos\theta_{t+1}, \quad y_{t+1} = r_{t+1}\sin\theta_{t+1},$$
where $\xi_\theta \sim \mathcal{N}(0, 4 \times 10^{-4})$, $\xi_r \sim \mathcal{N}(0, 1)$, $0 \le R_1 < R_2$ are two constants, and $\{(x_t, y_t)\}_{t=1}^m$ are treated as the hidden variables. As the observation at the $t$-th step, we render a $100 \times 100$ image with the camera located at $(x_t, y_t)$. For training data, we set $R_1 = 0$ and $R_2 = 10$, while for validation and test data we set $R_1 = 5$ and $R_2 = 7$. The motivation is to test the efficacy of enforcing the posterior distribution to concentrate around distance 6 via kRegBayes. We show a sample set of training and test images in Fig. 2.

We compare KBR, pKBR and kRegBayes with the traditional linear Kalman filter (KF [23]). Following [4], we down-sample the images and train a linear regressor for the observation model. In all experiments, we flatten the images to a column vector and apply Gaussian RBF kernels where needed. The kernel bandwidths are set to be the median distances in the training data. Based on experiments on the validation dataset, we set the regularization constants $\lambda_T = 10^{-6} = 2\delta_T$ and $\gamma_T = 10^{-5}$.
Figure 2: First several frames of training data (upper row) and test data (lower row).

Figure 3: (a) MSEs for different algorithms (best viewed in color). Since KF performs much worse than the kernel filters, we use a different scale and plot it on the right y-axis. (b) Probability histograms for the distance between each state and the scene center. All algorithms use 100 training data.
To provide supervision for kRegBayes, we uniformly generate 2000 data points $\{(\tilde{x}_i, \tilde{y}_i)\}_{i=1}^{2000}$ on the circle $r = 6$. Given the previous estimate $(\tilde{x}_t, \tilde{y}_t)$, we first compute $\tilde{\theta}_t = \arctan(\tilde{y}_t / \tilde{x}_t)$ (where the value $\tilde{\theta}_t$ is adapted according to the quadrant of $(\tilde{x}_t, \tilde{y}_t)$) and estimate $(\tilde{x}_{t+1}, \tilde{y}_{t+1}) = (\cos(\tilde{\theta}_t + 0.4), \sin(\tilde{\theta}_t + 0.4))$. Next, we find the nearest point $(\tilde{x}_k, \tilde{y}_k)$ to $(\tilde{x}_{t+1}, \tilde{y}_{t+1})$ in the supervision set and add the regularization $\gamma_T \|\mu(I_{t+1}) - \psi(\tilde{x}_k, \tilde{y}_k)\|$ to the posterior embedding, where $I_{t+1}$ denotes the $(t+1)$-th image.
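The quadrant-aware angle in this construction is exactly what atan2 computes; a sketch of one supervision step (ours, with the anchor set passed in as an array):

```python
import numpy as np

def supervision_target(x_prev, y_prev, anchors):
    """Predict the next position and snap to the nearest supervision anchor.

    anchors: (2000, 2) array of points on the circle r = 6.
    """
    theta = np.arctan2(y_prev, x_prev)      # quadrant-aware arctan
    pred = np.array([np.cos(theta + 0.4), np.sin(theta + 0.4)])
    k = np.argmin(((anchors - pred) ** 2).sum(axis=1))
    return anchors[k]                        # the t_i used in the penalty
```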
We vary the size of the training dataset from 100 to 300 and report the results of KBR, pKBR, kRegBayes
and KF on 200 test images in Fig. 3. KF performs much worse than all three kernel filters due
to the extreme non-linearity. The result of pKBR is a little worse than that of KBR, but the gap
decreases as the training dataset becomes larger. kRegBayes always performs the best. Note that the
advantage becomes less obvious as more data become available. This is because kernel methods can learn the
distance relation better with more data, and posterior regularization tends to be more useful when
data are not abundant and domain knowledge matters. Furthermore, Fig. 3(b) shows that the posterior
regularization helps the distances to concentrate.
6 Conclusions
We propose an optimizational framework for kernel Bayesian inference. With thresholding regularization, the minimizer of the framework is shown to be a reasonable estimator of the posterior kernel
embedding. In addition, we propose a posterior regularized kernel Bayesian inference framework
called kRegBayes. These frameworks are applied to non-linear state-space filtering tasks and the
results of different algorithms are compared extensively.
Acknowledgements
We thank all the anonymous reviewers for valuable suggestions. The work was supported by the
National Basic Research Program (973 Program) of China (No. 2013CB329403), National NSF
of China Projects (Nos. 61620106010, 61322308, 61332007), the Youth Top-notch Talent Support
Program, and Tsinghua Initiative Scientific Research Program (No. 20141080934).
References
[1] Alex J. Smola and Bernhard Schölkopf. Learning with kernels. Citeseer, 1998.
[2] Alain Berlinet and Christine Thomas-Agnan. Reproducing kernel Hilbert spaces in probability and statistics. Springer Science & Business Media, 2011.
[3] Alex Smola, Arthur Gretton, Le Song, and Bernhard Schölkopf. A Hilbert space embedding for distributions. In Algorithmic Learning Theory, pages 13–31. Springer, 2007.
[4] Le Song, Jonathan Huang, Alex Smola, and Kenji Fukumizu. Hilbert space embeddings of conditional distributions with applications to dynamical systems. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 961–968. ACM, 2009.
[5] Kenji Fukumizu, Le Song, and Arthur Gretton. Kernel Bayes' rule. In Advances in Neural Information Processing Systems, pages 1737–1745, 2011.
[6] Le Song, Kenji Fukumizu, and Arthur Gretton. Kernel embeddings of conditional distributions: A unified kernel framework for nonparametric inference in graphical models. Signal Processing Magazine, IEEE, 30(4):98–111, 2013.
[7] Le Song, Byron Boots, Sajid M. Siddiqi, Geoffrey J. Gordon, and Alex Smola. Hilbert space embeddings of hidden Markov models. 2010.
[8] Le Song, Arthur Gretton, and Carlos Guestrin. Nonparametric tree graphical models. In International Conference on Artificial Intelligence and Statistics, pages 765–772, 2010.
[9] Steffen Grünewälder, Guy Lever, Luca Baldassarre, Massi Pontil, and Arthur Gretton. Modelling transition dynamics in MDPs with RKHS embeddings. arXiv preprint arXiv:1206.4655, 2012.
[10] Yu Nishiyama, Abdeslam Boularias, Arthur Gretton, and Kenji Fukumizu. Hilbert space embeddings of POMDPs. arXiv preprint arXiv:1210.4887, 2012.
[11] Peter M. Williams. Bayesian conditionalisation and the principle of minimum information. The British Journal for the Philosophy of Science, 31(2), 1980.
[12] Charles A. Micchelli and Massimiliano Pontil. On learning vector-valued functions. Neural Computation, 17(1):177–204, 2005.
[13] Steffen Grünewälder, Guy Lever, Luca Baldassarre, Sam Patterson, Arthur Gretton, and Massimiliano Pontil. Conditional mean embeddings as regressors. In Proceedings of the 29th International Conference on Machine Learning (ICML-12), pages 1823–1830, 2012.
[14] Jun Zhu, Ning Chen, and Eric P. Xing. Bayesian inference with posterior regularization and applications to infinite latent SVMs. The Journal of Machine Learning Research, 15(1):1799–1847, 2014.
[15] Charles A. Micchelli, Yuesheng Xu, and Haizhang Zhang. Universal kernels. The Journal of Machine Learning Research, 7:2651–2667, 2006.
[16] Nachman Aronszajn. Theory of reproducing kernels. Transactions of the American Mathematical Society, 68(3):337–404, 1950.
[17] Kuzman Ganchev, João Graça, Jennifer Gillenwater, and Ben Taskar. Posterior regularization for structured latent variable models. The Journal of Machine Learning Research, 11:2001–2049, 2010.
[18] Jun Zhu, Amr Ahmed, and Eric Xing. MedLDA: Maximum margin supervised topic models. JMLR, 13:2237–2278, 2012.
[19] Andrea Caponnetto and Ernesto De Vito. Optimal rates for the regularized least-squares algorithm. Foundations of Computational Mathematics, 7(3):331–368, 2007.
[20] Ernesto De Vito and Andrea Caponnetto. Risk bounds for regularized least-squares algorithm with operator-valued kernels. Technical report, DTIC Document, 2005.
[21] Simon J. Julier and Jeffrey K. Uhlmann. New extension of the Kalman filter to nonlinear systems. In AeroSense'97, pages 182–193. International Society for Optics and Photonics, 1997.
[22] Eric A. Wan and Ronell Van Der Merwe. The unscented Kalman filter for nonlinear estimation. In Adaptive Systems for Signal Processing, Communications, and Control Symposium 2000 (AS-SPCC), pages 153–158. IEEE, 2000.
[23] Rudolph Emil Kalman. A new approach to linear filtering and prediction problems. Journal of Basic Engineering, 82(1):35–45, 1960.
[24] Alexander J. Smola and Risi Kondor. Kernels and regularization on graphs. In Learning Theory and Kernel Machines, pages 144–158. Springer, 2003.
[25] Heinz Werner Engl, Martin Hanke, and Andreas Neubauer. Regularization of inverse problems, volume 375. Springer Science & Business Media, 1996.
Maximization of
Approximately Submodular Functions
Thibaut Horel
Harvard University
[email protected]
Yaron Singer
Harvard University
[email protected]
Abstract
We study the problem of maximizing a function that is approximately submodular under a cardinality constraint. Approximate submodularity implicitly appears in a wide range of applications, as in many cases errors in the evaluation of a submodular function break submodularity. Say that $F$ is $\varepsilon$-approximately submodular if there exists a submodular function $f$ such that $(1-\varepsilon)f(S) \le F(S) \le (1+\varepsilon)f(S)$ for all subsets $S$. We are interested in characterizing the query-complexity of maximizing $F$ subject to a cardinality constraint $k$ as a function of the error level $\varepsilon > 0$. We provide both lower and upper bounds: for $\varepsilon > n^{-1/2}$ we show an exponential query-complexity lower bound. In contrast, when $\varepsilon < 1/k$ or under a stronger bounded curvature assumption, we give constant approximation algorithms.
1 Introduction
In recent years, there has been a surge of interest in machine learning methods that involve discrete
optimization. In this realm, the evolving theory of submodular optimization has been a catalyst
for progress in extraordinarily varied application areas. Examples include active learning and
experimental design [9, 12, 14, 19, 20], sparse reconstruction [1, 6, 7], graph inference [23, 24, 8],
video analysis [29], clustering [10], document summarization [21], object detection [27], information
retrieval [28], network inference [23, 24], and information diffusion in networks [17].
The power of submodularity as a modeling tool lies in its ability to capture interesting application
domains while maintaining provable guarantees for optimization. The guarantees however, apply to
the case in which one has access to the exact function to optimize. In many applications, one does
not have access to the exact version of the function, but rather some approximate version of it. If
the approximate version remains submodular then the theory of submodular optimization clearly
applies and modest errors translate to modest loss in quality of approximation. But if the approximate
version of the function ceases to be submodular all bets are off.
Approximate submodularity. Recall that a function $f : 2^N \to \mathbb{R}$ is submodular if for all $S, T \subseteq N$, $f(S \cup T) + f(S \cap T) \le f(S) + f(T)$. We say that a function $F : 2^N \to \mathbb{R}$ is $\varepsilon$-approximately submodular if there exists a submodular function $f : 2^N \to \mathbb{R}$ s.t. for any $S \subseteq N$:
$$(1 - \varepsilon) f(S) \le F(S) \le (1 + \varepsilon) f(S). \qquad (1)$$
Unless otherwise stated, all submodular functions $f$ considered are normalized ($f(\emptyset) = 0$) and monotone ($f(S) \le f(T)$ for $S \subseteq T$). Approximate submodularity appears in various domains.
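As a concrete illustration of how such functions arise, wrapping any submodular $f$ with bounded multiplicative noise yields an $\varepsilon$-approximately submodular $F$. The sketch below is ours (the consistent-oracle caching and the uniform noise model are illustrative assumptions):

```python
import random

def noisy_oracle(f, eps, seed=0):
    """Wrap a submodular f into an eps-approximately submodular F:
    F(S) = xi_S * f(S) with a fixed xi_S drawn from [1-eps, 1+eps]."""
    cache, rng = {}, random.Random(seed)
    def F(S):
        key = frozenset(S)
        if key not in cache:            # the oracle answers consistently
            cache[key] = rng.uniform(1.0 - eps, 1.0 + eps)
        return cache[key] * f(S)
    return F

# Example with a small coverage function over subsets of sets:
coverage = lambda S: len(set().union(*S)) if S else 0
F = noisy_oracle(coverage, eps=0.05)
print(F([frozenset({1, 2}), frozenset({2, 3})]))   # approx 3
```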
• Optimization with noisy oracles. In these scenarios, we wish to solve optimization problems where one does not have access to a submodular function but a noisy version of it. An
example recently studied in [5] involves maximizing information gain in graphical models;
this captures many Bayesian experimental design settings.
• PMAC learning. In the active area of learning submodular functions initiated by Balcan
and Harvey [3], the objective is to approximately learn submodular functions. Roughly
speaking, the PMAC-learning framework guarantees that the learned function is a constantfactor approximation of the true submodular function with high probability. Therefore, after
learning a submodular function, one obtains an approximately submodular function.
• Sketching. Since submodular functions have, in general, exponential-size representation, [2]
studied the problem of sketching submodular functions: finding a function with polynomialsize representation approximating a given submodular function. The resulting sketch is an
approximately submodular function.
Optimization of approximate submodularity. We focus on optimization problems of the form
$$\max_{S : |S| \le k} F(S) \qquad (2)$$
where $F$ is an $\varepsilon$-approximately submodular function and $k \in \mathbb{N}$ is the cardinality constraint. We say that a set $S \subseteq N$ is an $\alpha$-approximation to the optimal solution of (2) if $|S| \le k$ and $F(S) \ge \alpha \max_{|T| \le k} F(T)$. As is common in submodular optimization, we assume the value query model: optimization algorithms have access to the objective function $F$ in a black-box manner, i.e. they make queries to an oracle which returns, for a queried set $S$, the value $F(S)$. The query-complexity of the algorithm is simply the number of queries made to the oracle. An algorithm is called an $\alpha$-approximation algorithm if for any approximately submodular input $F$ the solution returned by the algorithm is an $\alpha$-approximately optimal solution. Note that if there exists an $\alpha$-approximation algorithm for the problem of maximizing an $\varepsilon$-approximately submodular function $F$, then this algorithm is an $\frac{\alpha(1-\varepsilon)}{1+\varepsilon}$-approximation algorithm for the original submodular function $f$.¹ Conversely, if no such algorithm exists, this implies an inapproximability result for the original function.
Clearly, if a function is 0-approximately submodular then it retains desirable provable guarantees², and if it is arbitrarily far from being submodular it can be shown to be trivially inapproximable (e.g. maximize a function which takes a value of 1 for a single arbitrary set $S \subseteq N$ and 0 elsewhere). The question is therefore:

How close should a function be to submodular to retain provable approximation guarantees?

In recent work, it was shown that for any constant $\varepsilon > 0$ there exists a class of $\varepsilon$-approximately submodular functions for which no algorithm using fewer than exponentially-many queries has a constant approximation ratio for the canonical problem of maximizing a monotone submodular function under a cardinality constraint [13]. Such an impossibility result suggests two natural relaxations: the first is to make additional assumptions about the structure of errors, such as a stochastic error model. This is the direction taken in [13], where the main result shows that when errors are drawn i.i.d. from a wide class of distributions, optimal guarantees are obtainable. The second alternative is to assume the error is subconstant, which is the focus of this paper.
1.1 Overview of the results
Our main result is a spoiler: even for ε = 1/n^{1/2−δ} for any constant δ > 0 and n = |N|, no algorithm can obtain a constant-factor approximation guarantee. More specifically, we show that:

• For the general case of monotone submodular functions, for any δ > 0, given access to a 1/n^{1/2−δ}-approximately submodular function, no algorithm can obtain an approximation ratio better than O(1/n^δ) using polynomially many queries (Theorem 3);
• For the case of coverage functions we show that for any fixed δ > 0, given access to a 1/n^{1/3−δ}-approximately submodular function, no algorithm can obtain an approximation ratio strictly better than O(1/n^δ) using polynomially many queries (Theorem 4).
¹ Observe that for an approximately submodular function F, there exist many submodular functions f of which it is an approximation. All such submodular functions f are called representatives of F. The conversion between an approximation guarantee for F and an approximation guarantee for a representative f of F holds for any choice of the representative.
² Specifically, [22] shows that it is possible to obtain a (1 − 1/e) approximation ratio for a cardinality constraint.
The above results imply that even in cases where the objective function is arbitrarily close to being
submodular as the number n of elements in N grows, reasonable optimization guarantees are
unachievable. The second result shows that this is the case even when we aim to optimize coverage
functions. Coverage functions are an important class of submodular functions which are used in
numerous applications [11, 21, 18].
Approximation guarantees. The inapproximability results follow from two properties of the model: the structure of the function (submodularity), and the size of ε in the definition of approximate submodularity. A natural question is whether one can relax either condition to obtain positive approximation guarantees. We show that this is indeed the case:
• In the general case of monotone submodular functions, we show that the greedy algorithm achieves a 1 − 1/e − O(δ) approximation ratio when ε = δ/k (Theorem 5). Furthermore, this bound is tight: given a 1/k^{1−β}-approximately submodular function, the greedy algorithm no longer provides a constant-factor approximation guarantee (Proposition 6).
• Since our query-complexity lower bound holds for coverage functions, which already contain a great deal of structure, we relax the structural assumption by considering functions with bounded curvature c; this is a common assumption in applications of submodularity to machine learning and has been used in prior work to obtain theoretical guarantees [15, 16]. Under this assumption, we give an algorithm which achieves an approximation ratio of (1 − c)((1−ε)/(1+ε))² (Proposition 8).
We state our positive results for the case of a cardinality constraint of k. Similar results hold for matroids of rank k; the proofs of those can be found in the Appendix. Note that cardinality constraints are a special case of matroid constraints; therefore our lower bounds also apply to matroid constraints.
1.2 Discussion and additional related work
Before transitioning to the technical results, we briefly survey error in applications of submodularity and the implications of our results for these applications. First, notice that there is a coupling between approximate submodularity and erroneous evaluations of a submodular function: if one can evaluate a submodular function within a (multiplicative) accuracy of 1 ± ε, then this is an ε-approximately submodular function.
Additive vs. multiplicative approximation. The definition of approximate submodularity in (1) uses relative (multiplicative) approximation. We could instead consider absolute (additive) approximation, i.e. require that f(S) − ε ≤ F(S) ≤ f(S) + ε for all sets S. This definition has been used in the related problem of optimizing approximately convex functions [4, 25], where functions are assumed to have normalized range. For un-normalized functions or functions whose range is unknown, a relative approximation is more informative. When the range is known, specifically if an upper bound B on f(S) is known, an (ε/B)-approximately submodular function is also an ε-additively approximate submodular function. This implies that our lower bounds and approximation results could equivalently be expressed for additive approximations of normalized functions.
Error vs. noise. If we interpret Equation (1) in terms of error, we see that no assumption is made on the source of the error yielding the approximately submodular function. In particular, there is no stochastic assumption: the error is deterministic and worst-case. Previous work has considered submodular or combinatorial optimization under random noise. Two models naturally arise:

• consistent noise: the approximate function F is such that F(S) = ξ_S f(S), where ξ_S is drawn independently for each set S from a distribution D. The key aspect of consistent noise is that the random draws occur only once: querying the same set multiple times always returns the same value. This definition is the one adopted in [13]; a similar notion is called persistent noise in [5].
• inconsistent noise: in this model F(S) is a random variable such that f(S) = E[F(S)]. The noisy oracle can be queried multiple times and each query corresponds to a new independent random draw from the distribution of F(S). This model was considered in [26] in the context of dataset summarization and is also implicitly present in [17], where the objective function is defined as an expectation and has to be estimated via sampling.
Formal guarantees for consistent noise have been obtained in [13]. A standard way to approach optimization with inconsistent noise is to estimate the value of each set used by the algorithm to an accuracy ε via independent randomized sampling, where ε is chosen small enough so as to obtain approximation guarantees. Specifically, assume that the algorithm only makes polynomially many value queries and that the function f is such that F(S) ∈ [b, B] for any set S. Then a classical application of the Chernoff bound combined with a union bound implies that if the value of each set is estimated by averaging the value of m samples with

$$m = \Omega\!\left(\frac{B \log n}{b\,\varepsilon^2}\right),$$

then with high probability the estimated value F(S) of each set used by the algorithm is such that (1 − ε)f(S) ≤ F(S) ≤ (1 + ε)f(S). In other words, randomized sampling is used to construct a function which is ε-approximately submodular with high probability.
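To make the reduction from noise to approximate submodularity concrete, here is a minimal sketch (ours, not from the paper): a toy coverage function stands in for f, the oracle returns independent multiplicative perturbations, and averaging m queries per set yields the ε-approximately submodular estimate discussed above. All function names and parameter values are illustrative.

```python
import random

def coverage(sets, S):
    """A toy submodular f: size of the union of the subsets indexed by S."""
    covered = set()
    for i in S:
        covered |= sets[i]
    return float(len(covered))

def noisy_oracle(f, S, rel_noise=0.2):
    """Inconsistent noise: each query returns a fresh multiplicative perturbation."""
    return f(S) * random.uniform(1 - rel_noise, 1 + rel_noise)

def averaged_estimate(f, S, m):
    """Average m independent queries; by the Chernoff/union-bound argument, this is
    within a (1 +/- eps) factor of f(S) w.h.p. once m = Omega(B log n / (b eps^2))."""
    return sum(noisy_oracle(f, S) for _ in range(m)) / m

if __name__ == "__main__":
    random.seed(0)
    sets = [set(random.sample(range(20), 5)) for _ in range(10)]
    f = lambda S: coverage(sets, S)
    S = {0, 1, 2}
    for m in (1, 10, 100, 1000):
        print(f"m={m:4d}  estimate={averaged_estimate(f, S, m):7.3f}  true={f(S):.0f}")
```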
Implications of results in this paper. Given the above discussion, our results can be interpreted in the context of noise as providing guarantees on what is a tolerable noise level. In particular, Theorem 5 implies that if a submodular function is estimated using m samples, with m = Ω(Bn² log n / b), then the Greedy algorithm is a constant approximation algorithm for the problem of maximizing a monotone submodular function under a cardinality constraint. Theorem 3 implies that if m = O(Bn log n / b), then the resulting estimation error is within the range where no algorithm can obtain a constant approximation ratio.
2 Query-complexity lower bounds

In this section we give query-complexity lower bounds for the problem of maximizing an ε-approximately submodular function subject to a cardinality constraint. In Section 2.1, we show an exponential query-complexity lower bound for the case of general submodular functions when ε ≥ n^{−1/2} (Theorem 3). The same lower bound is then shown to hold even when we restrict ourselves to the case of coverage functions for ε ≥ n^{−1/3} (Theorem 4).
A general overview of query-complexity lower bounds. At a high level, the lower bounds are constructed as follows. We define a class of monotone submodular functions ℱ, and draw a function f uniformly at random from ℱ. In addition we define a submodular function g : 2^N → ℝ such that max_{|S|≤k} g(S) ≤ ρ(n) · max_{|S|≤k} f(S), where ρ(n) = o(1) for a particular choice of k < n. We then define the approximately submodular function F:

$$F(S) = \begin{cases} g(S) & \text{if } (1-\varepsilon)f(S) \le g(S) \le (1+\varepsilon)f(S) \\ f(S) & \text{otherwise} \end{cases}$$

Note that by its definition, this function is an ε-approximately submodular function. To show the lower bound, we reduce the problem of proving inapproximability of optimizing an approximately submodular function to the problem of distinguishing between f and g using F. We show that for every algorithm, there exists a function f ∈ ℱ such that, if f is unknown to the algorithm, it cannot distinguish between the case in which the underlying function is f and the case in which the underlying function is g using polynomially many value queries to F, even when g is known to the algorithm. Since max_{|S|≤k} g(S) ≤ ρ(n) · max_{|S|≤k} f(S), this implies that no algorithm can obtain an approximation better than ρ(n) using polynomially many queries; otherwise such an algorithm could be used to distinguish between f and g.
2.1 Monotone submodular functions
Constructing a class of hard functions. A natural candidate for a class of functions ℱ and a function g satisfying the properties described in the overview is:

$$f^H(S) = |S \cap H| \quad\text{and}\quad g(S) = \frac{|S|h}{n}$$

for H ⊆ N of size h. The reason why g is hard to distinguish from f^H is that when H is drawn uniformly at random among sets of size h, f^H is close to g with high probability. This follows from an application of the Chernoff bound for negatively associated random variables. Formally, this is stated in Lemma 1, whose proof is given in the Appendix.
Lemma 1. Let H ⊆ N be a set drawn uniformly among sets of size h. Then for any S ⊆ N, writing µ = |S|h/n, for any ε such that ε²µ > 1:

$$P_H\big[(1-\varepsilon)\mu \le |S \cap H| \le (1+\varepsilon)\mu\big] \ge 1 - 2^{-\Omega(\varepsilon^2 \mu)}$$
Unfortunately this construction fails if the algorithm is allowed to evaluate the approximately submodular function at small sets: for those, the concentration of Lemma 1 is not high enough. Our construction instead relies on designing ℱ and g such that when S is "large", we can make use of the concentration result of Lemma 1, and when S is "small", functions in ℱ and g are deterministically close to each other. Specifically, we introduce, for H ⊆ N of size h:

$$f^H(S) = |S \cap H| + \min\!\left(|S \cap (N \setminus H)|,\ \alpha\Big(1 - \frac{h}{n}\Big)\right), \qquad g(S) = \min\!\left(|S|,\ \frac{|S|h}{n} + \alpha\Big(1 - \frac{h}{n}\Big)\right) \qquad (3)$$
The values of the parameters α and h will be set later in the analysis. Observe that when S is small (|S ∩ (N \ H)| ≤ α(1 − h/n) and |S| ≤ α), then f^H(S) = g(S) = |S|. When S is large, Lemma 1 implies that |S ∩ H| is close to |S|h/n and |S ∩ (N \ H)| is close to |S|(1 − h/n) with high probability.

First note that f^H and g are monotone submodular functions: f^H is the sum of a monotone additive function and a monotone budget-additive function, and g can be written g(S) = G(|S|) where G(x) = min(x, xh/n + α(1 − h/n)). G is a non-decreasing concave function (the minimum of two non-decreasing linear functions), hence g is monotone submodular.
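As a sanity check on this construction (not part of the paper's analysis), one can draw H at random and measure how often g falls within a (1 ± ε) factor of f^H on random query sets, in the spirit of Lemma 2 below. A small simulation sketch (ours; all parameter values are illustrative):

```python
import random

def f_H(S, H, alpha, h, n):
    """f^H(S) = |S ∩ H| + min(|S \\ H|, alpha * (1 - h/n))  -- equation (3)."""
    return len(S & H) + min(len(S - H), alpha * (1 - h / n))

def g(S, alpha, h, n):
    """g(S) = min(|S|, |S| h/n + alpha (1 - h/n))  -- equation (3)."""
    return min(len(S), len(S) * h / n + alpha * (1 - h / n))

def close_fraction(n=1000, h=200, alpha=100, eps=0.3, trials=2000, set_size=500):
    ground = list(range(n))
    H = set(random.sample(ground, h))
    hits = 0
    for _ in range(trials):
        S = set(random.sample(ground, set_size))
        fv, gv = f_H(S, H, alpha, h, n), g(S, alpha, h, n)
        if (1 - eps) * fv <= gv <= (1 + eps) * fv:
            hits += 1
    return hits / trials

if __name__ == "__main__":
    random.seed(1)
    print("fraction of sets where g is (1±ε)-close to f^H:", close_fraction())
```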
Next, we observe that there is a gap between the maximum of the functions f^H and that of g. When |S| ≥ α, g(S) = |S|h/n + α(1 − h/n). The maximum of g is clearly attained when |S| = k and is upper-bounded by kh/n + α. For f^H, the maximum is equal to k and is attained when S is a subset of H of size k. So for α ≤ k ≤ h, we obtain:

$$\max_{|S| \le k} g(S) \le \Big(\frac{\alpha}{k} + \frac{h}{n}\Big) \max_{|S| \le k} f^H(S), \qquad H \subseteq N \qquad (4)$$
Indistinguishability. The main challenge is now to prove that f^H is close to g with high probability. Formally, we have the following lemma.

Lemma 2. For h ≤ n/2, let H be drawn uniformly at random among sets of size h. Then for any S:

$$P_H\big[(1-\varepsilon)f^H(S) \le g(S) \le (1+\varepsilon)f^H(S)\big] \ge 1 - 2^{-\Omega(\varepsilon^2 \alpha h / n)} \qquad (5)$$
Proof. For concision we define H̄ := N \ H, the complement of H in N. We consider four cases depending on the cardinality of S and of S ∩ H̄.

Case 1: |S| ≤ α and |S ∩ H̄| ≤ α(1 − h/n). In this case f^H(S) = |S ∩ H| + |S ∩ H̄| = |S| and g(S) = |S|. The two functions are equal and the inequality is immediately satisfied.

Case 2: |S| ≤ α and |S ∩ H̄| ≥ α(1 − h/n). In this case g(S) = |S| = |S ∩ H| + |S ∩ H̄| and f^H(S) = |S ∩ H| + α(1 − h/n). By assumption on |S ∩ H̄|, we have:

$$(1-\varepsilon)\,\alpha\Big(1 - \frac{h}{n}\Big) \le |S \cap \bar{H}|$$

For the other side, by assumption on |S ∩ H̄|, we have that |S| ≥ α(1 − h/n) ≥ α/2 (since h ≤ n/2). We can then apply Lemma 1 and obtain:

$$P_H\Big[|S \cap \bar{H}| \le (1+\varepsilon)\,\alpha\Big(1 - \frac{h}{n}\Big)\Big] \ge 1 - 2^{-\Omega(\varepsilon^2 \alpha h / n)}$$

Case 3: |S| ≥ α and |S ∩ H̄| ≥ α(1 − h/n). In this case f^H(S) = |S ∩ H| + α(1 − h/n) and g(S) = |S|h/n + α(1 − h/n). We need to show that:

$$P_H\Big[(1-\varepsilon)\frac{|S|h}{n} \le |S \cap H| \le (1+\varepsilon)\frac{|S|h}{n}\Big] \ge 1 - 2^{-\Omega(\varepsilon^2 \alpha h / n)}$$

This is a direct consequence of Lemma 1.

Case 4: |S| ≥ α and |S ∩ H̄| ≤ α(1 − h/n). In this case f^H(S) = |S ∩ H| + |S ∩ H̄| and g(S) = |S|h/n + α(1 − h/n). As in the previous case, we have:

$$P_H\Big[(1-\varepsilon)\frac{|S|h}{n} \le |S \cap H| \le (1+\varepsilon)\frac{|S|h}{n}\Big] \ge 1 - 2^{-\Omega(\varepsilon^2 \alpha h / n)}$$

By the assumption on |S ∩ H̄|, we also have:

$$|S \cap \bar{H}| \le \alpha\Big(1 - \frac{h}{n}\Big) \le (1+\varepsilon)\,\alpha\Big(1 - \frac{h}{n}\Big)$$

So we need to show that:

$$P_H\Big[(1-\varepsilon)\,\alpha\Big(1 - \frac{h}{n}\Big) \le |S \cap \bar{H}|\Big] \ge 1 - 2^{-\Omega(\varepsilon^2 \alpha h / n)}$$

and then we will be able to conclude by union bound. This is again a consequence of Lemma 1. ∎
Theorem 3. For any 0 < δ < 1/2, ε ≥ 1/n^{1/2−δ}, and any (possibly randomized) algorithm with query complexity smaller than 2^{Ω(n^{δ/2})}, there exists an ε-approximately submodular function F such that, for the problem of maximizing F under a cardinality constraint, the algorithm achieves an approximation ratio upper-bounded by 2/n^{δ/2} with probability at least 1 − 2^{−Ω(n^{δ/2})}.
Proof. We set k = h = n^{1−δ/2} and α = n^{1−δ}. Let H be drawn uniformly at random among sets of size h and let f^H and g be as in (3). We first define the ε-approximately submodular function F^H:

$$F^H(S) = \begin{cases} g(S) & \text{if } (1-\varepsilon)f^H(S) \le g(S) \le (1+\varepsilon)f^H(S) \\ f^H(S) & \text{otherwise} \end{cases}$$

It is clear from the definition that this is an ε-approximately submodular function. Consider a deterministic algorithm A and let us denote by S_1, ..., S_m the queries made by the algorithm when given as input the function g (g is 0-approximately submodular, hence it is a valid input to A). Without loss of generality, we can include the set returned by the algorithm in the queries, so S_m denotes the set returned by the algorithm. By (5), for any i ∈ [m]:

$$P_H\big[(1-\varepsilon)f^H(S_i) \le g(S_i) \le (1+\varepsilon)f^H(S_i)\big] \ge 1 - 2^{-\Omega(n^{\delta/2})}$$

When these events realize, we have F^H(S_i) = g(S_i). By union bound over i, when m < 2^{Ω(n^{δ/2})}:

$$P_H\big[\forall i,\ F^H(S_i) = g(S_i)\big] > 1 - m\,2^{-\Omega(n^{\delta/2})} = 1 - 2^{-\Omega(n^{\delta/2})} > 0$$

This implies the existence of H such that A follows the same query path when given g and F^H as inputs. For this H:

$$F^H(S_m) = g(S_m) \le \max_{|S| \le k} g(S) \le \Big(\frac{\alpha}{k} + \frac{h}{n}\Big) \max_{|S| \le k} f^H(S) = \Big(\frac{\alpha}{k} + \frac{h}{n}\Big) \max_{|S| \le k} F^H(S)$$

where the second inequality comes from (4). For our choice of parameters, α/k + h/n = 2/n^{δ/2}, hence:

$$F^H(S_m) \le \frac{2}{n^{\delta/2}} \max_{|S| \le k} F^H(S)$$
Let us now consider the case where the algorithm A is randomized, and let us denote by A_{H,R} the solution returned by the algorithm when given function F^H as input and random bits R. We have:

$$P_{H,R}\Big[F^H(A_{H,R}) \le \tfrac{2}{n^{\delta/2}} \max_{|S|\le k} F^H(S)\Big] = \sum_r P[R=r]\,P_H\Big[F^H(A_{H,r}) \le \tfrac{2}{n^{\delta/2}} \max_{|S|\le k} F^H(S)\Big]$$
$$\ge \big(1 - 2^{-\Omega(n^{\delta/2})}\big)\sum_r P[R=r] = 1 - 2^{-\Omega(n^{\delta/2})}$$

where the inequality comes from the analysis of the deterministic case (when the random bits are fixed, the algorithm is deterministic). This implies the existence of H such that:

$$P_R\Big[F^H(A_{H,R}) \le \tfrac{2}{n^{\delta/2}} \max_{|S|\le k} F^H(S)\Big] \ge 1 - 2^{-\Omega(n^{\delta/2})}$$

and concludes the proof of the theorem. ∎
2.2 Coverage functions

In this section, we show that an exponential query-complexity lower bound still holds even in the restricted case where the objective function approximates a coverage function. Recall that by definition of a coverage function, the elements of the ground set N are subsets of a set U called the universe. For a set S = {S_1, ..., S_m} of subsets of U, the value f(S) is given by f(S) = |∪_{i=1}^m S_i|.

Theorem 4. For any 0 < δ < 1/2, ε ≥ 1/n^{1/3−δ}, and any (possibly randomized) algorithm with query complexity smaller than 2^{Ω(n^{δ/2})}, there exists a function F which ε-approximates a coverage function, such that for the problem of maximizing F under a cardinality constraint, the algorithm achieves an approximation ratio upper-bounded by 2/n^{δ/2} with probability at least 1 − 2^{−Ω(n^{δ/2})}.

The proof of Theorem 4 has the same structure as the proof of Theorem 3. The main difference is a different choice of the class of functions ℱ and the function g. The details can be found in the appendix.
3 Approximation algorithms

The results from Section 2 can be seen as a strong impossibility result, since an exponential query-complexity lower bound holds even in the specific case of coverage functions, which exhibit a lot of structure. Faced with such an impossibility, we analyze two ways to relax the assumptions in order to obtain positive results. One relaxation considers ε-approximate submodularity when ε ≤ 1/k; in this case we show that the Greedy algorithm achieves a constant approximation ratio (and that ε = 1/k is tight for the Greedy algorithm). The other relaxation considers functions with stronger structural properties, namely functions with bounded curvature. In this case, we show that a constant approximation ratio can be obtained for any constant ε.
3.1 Greedy algorithm

For the general class of monotone submodular functions, the result of [22] shows that a simple greedy algorithm achieves an approximation ratio of 1 − 1/e. Running the same algorithm on an ε-approximately submodular function results in a constant approximation ratio when ε ≤ 1/k. The detailed description of the algorithm can be found in the appendix.
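Since the paper defers the formal description to its appendix, the sketch below (ours) is a rendering of the classical greedy procedure of [22] run against a value oracle F, which is how it would be applied to an approximately submodular input; the example oracle is illustrative.

```python
def greedy(F, ground_set, k):
    """Classical greedy: repeatedly add the element maximizing the value of the
    query oracle F on the augmented set, until k elements are chosen."""
    S = frozenset()
    for _ in range(k):
        candidates = [e for e in ground_set if e not in S]
        if not candidates:
            break
        best = max(candidates, key=lambda e: F(S | {e}))
        S = S | {best}
    return S

if __name__ == "__main__":
    sets = [{1, 2}, {2, 3}, {3, 4}, {5}]
    F = lambda S: float(len(set().union(*(sets[i] for i in S)))) if S else 0.0
    print(sorted(greedy(F, range(len(sets)), k=2)))  # -> [0, 2]
```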
Theorem 5. Let F be an ε-approximately submodular function. Then the set S returned by the greedy algorithm satisfies:

$$F(S) \ge \frac{1}{1 + \frac{4k\varepsilon}{(1-\varepsilon)^2}}\left(1 - \Big(1 - \frac{1}{k}\Big)^k \Big(\frac{1-\varepsilon}{1+\varepsilon}\Big)^{2k}\right) \max_{S : |S| \le k} F(S)$$

In particular, for k ≥ 2, any constant 0 ≤ δ < 1 and ε = δ/k, this approximation ratio is constant and lower-bounded by 1 − 1/e − 16δ.
Proof. Let us denote by O an optimal solution to max_{S:|S|≤k} F(S) and by f a submodular representative of F. Let us write S = {e_1, ..., e_ℓ} for the set returned by the greedy algorithm and define S_i = {e_1, ..., e_i}. Then:

$$\begin{aligned}
f(O) &\le f(S_i) + \sum_{e \in O} \big[f(S_i \cup \{e\}) - f(S_i)\big] \le f(S_i) + \sum_{e \in O} \Big[\frac{1}{1-\varepsilon} F(S_i \cup \{e\}) - f(S_i)\Big] \\
&\le f(S_i) + \sum_{e \in O} \Big[\frac{1}{1-\varepsilon} F(S_{i+1}) - f(S_i)\Big] \le f(S_i) + \sum_{e \in O} \Big[\frac{1+\varepsilon}{1-\varepsilon} f(S_{i+1}) - f(S_i)\Big] \\
&\le f(S_i) + k\Big[\frac{1+\varepsilon}{1-\varepsilon} f(S_{i+1}) - f(S_i)\Big]
\end{aligned}$$

where the first inequality uses submodularity, the second uses the definition of approximate submodularity, the third uses the definition of the algorithm, the fourth uses approximate submodularity again, and the last one uses that |O| ≤ k.
Reordering the terms, and expressing the inequality in terms of F (using the definition of approximate submodularity), gives:

$$F(S_{i+1}) \ge \Big(1 - \frac{1}{k}\Big)\Big(\frac{1-\varepsilon}{1+\varepsilon}\Big)^2 F(S_i) + \frac{1}{k}\Big(\frac{1-\varepsilon}{1+\varepsilon}\Big)^2 F(O)$$

This is an inductive inequality of the form u_{i+1} ≥ a·u_i + b with u_0 = 0, whose solution is u_i ≥ (b/(1−a))(1 − a^i). For our specific a and b, we obtain:

$$F(S) \ge \frac{1}{1 + \frac{4k\varepsilon}{(1-\varepsilon)^2}}\left(1 - \Big(1 - \frac{1}{k}\Big)^k \Big(\frac{1-\varepsilon}{1+\varepsilon}\Big)^{2k}\right) F(O) \qquad \blacksquare$$
The following proposition shows that ε = 1/k is tight for the greedy algorithm, and that this is the case even for additive functions. The proof can be found in the Appendix.

Proposition 6. For any β > 0, there exists an ε-approximately additive function with ε = Θ(1/k^{1−β}) for which the Greedy algorithm has a non-constant approximation ratio.
Matroid constraint. Theorem 5 can be generalized to the case of matroid constraints. We are now looking at a problem of the form max_{S∈I} F(S), where I is the set of independent sets of a matroid.

Theorem 7. Let I be the set of independent sets of a matroid of rank k, and let F be an ε-approximately submodular function. Then if S is the set returned by the greedy algorithm:

$$F(S) \ge \frac{1}{2}\,\frac{1-\varepsilon}{1+\varepsilon}\,\frac{1}{1+\frac{k\varepsilon}{1-\varepsilon}}\ \max_{S \in \mathcal{I}} f(S)$$

In particular, for k ≥ 2, any constant 0 ≤ δ < 1 and ε = δ/k, this approximation ratio is constant and lower-bounded by (1/2 − 2δ).
3.2 Bounded curvature

With an additional assumption on the curvature of the submodular function f, it is possible to obtain a constant approximation ratio for any ε-approximately submodular function with constant ε. Recall that the curvature c of a function f : 2^N → ℝ is defined by

$$c = 1 - \min_{a \in N} \frac{f_{N \setminus \{a\}}(a)}{f(a)},$$

where f_S(a) := f(S ∪ {a}) − f(S) denotes the marginal contribution of a to S. A consequence of this definition when f is submodular is that for any S ⊆ N and a ∈ N \ S we have f_S(a) ≥ (1 − c)f(a).

Proposition 8. For the problem max_{|S|≤k} F(S), where F is an ε-approximately submodular function which approximates a monotone submodular f with curvature c, there exists a polynomial-time algorithm which achieves an approximation ratio of (1 − c)((1−ε)/(1+ε))².
References
[1] F. Bach. Structured sparsity-inducing norms through submodular functions. In NIPS, 2010.
[2] A. Badanidiyuru, S. Dobzinski, H. Fu, R. Kleinberg, N. Nisan, and T. Roughgarden. Sketching valuation functions. In SODA, pages 1025–1035. SIAM, 2012.
[3] M.-F. Balcan and N. J. Harvey. Learning submodular functions. In Proceedings of the forty-third annual ACM symposium on Theory of computing, pages 793–802. ACM, 2011.
[4] A. Belloni, T. Liang, H. Narayanan, and A. Rakhlin. Escaping the local minima via simulated annealing: Optimization of approximately convex functions. In COLT, pages 240–265, 2015.
[5] Y. Chen, S. H. Hassani, A. Karbasi, and A. Krause. Sequential information maximization: When is greedy near-optimal? In COLT, pages 338–363, 2015.
[6] A. Das, A. Dasgupta, and R. Kumar. Selecting diverse features via spectral relaxation. In NIPS,
2012.
8
[7] A. Das and D. Kempe. Submodular meets spectral: Greedy algorithms for subset selection,
sparse approximation and dictionary selection. In ICML, 2011.
[8] A. Defazio and T. Caetano. A convex formulation for learning scale-free networks via submodular relaxation. In NIPS, 2012.
[9] D. Golovin and A. Krause. Adaptive submodularity: Theory and applications in active learning and stochastic optimization. JAIR, 42:427–486, 2011.
[10] R. Gomes and A. Krause. Budgeted nonparametric learning from data streams. In ICML, 2010.
[11] C. Guestrin, A. Krause, and A. Singh. Near-optimal sensor placements in Gaussian processes.
In International Conference on Machine Learning (ICML), August 2005.
[12] A. Guillory and J. Bilmes. Simultaneous learning and covering with adversarial noise. In ICML,
2011.
[13] A. Hassidim and Y. Singer. Submodular optimization under noise. CoRR, abs/1601.03095,
2016.
[14] S. Hoi, R. Jin, J. Zhu, and M. Lyu. Batch mode active learning and its application to medical
image classification. In ICML, 2006.
[15] R. K. Iyer and J. A. Bilmes. Submodular optimization with submodular cover and submodular knapsack constraints. In NIPS, pages 2436–2444, 2013.
[16] R. K. Iyer, S. Jegelka, and J. A. Bilmes. Curvature and optimal algorithms for learning and minimizing submodular functions. In NIPS, pages 2742–2750, 2013.
[17] D. Kempe, J. Kleinberg, and E. Tardos. Maximizing the spread of influence through a social
network. In KDD, 2003.
[18] A. Krause and C. Guestrin. Near-optimal observation selection using submodular functions. In
National Conference on Artificial Intelligence (AAAI), Nectar track, July 2007.
[19] A. Krause and C. Guestrin. Nonmyopic active learning of Gaussian processes: an exploration–exploitation approach. In ICML, 2007.
[20] A. Krause and C. Guestrin. Submodularity and its applications in optimized information
gathering. ACM Trans. on Int. Systems and Technology, 2(4), 2011.
[21] H. Lin and J. Bilmes. A class of submodular functions for document summarization. In
ACL/HLT, 2011.
[22] G. L. Nemhauser, L. A. Wolsey, and M. L. Fisher. An analysis of approximations for maximizing submodular set functions-I. Mathematical Programming, 14(1):265–294, 1978.
[23] M. G. Rodriguez, J. Leskovec, and A. Krause. Inferring networks of diffusion and influence.
ACM TKDD, 5(4), 2011.
[24] M. G. Rodriguez and B. Schölkopf. Submodular inference of diffusion networks from multiple trees. In ICML, 2012.
[25] Y. Singer and J. Vondrák. Information-theoretic lower bounds for convex optimization with erroneous oracles. In NIPS, pages 3186–3194, 2015.
[26] A. Singla, S. Tschiatschek, and A. Krause. Noisy submodular maximization via adaptive
sampling with applications to crowdsourced image collection summarization. arXiv preprint
arXiv:1511.07211, 2015.
[27] H. Song, R. Girshick, S. Jegelka, J. Mairal, Z. Harchaoui, and T. Darrell. On learning to localize
objects with minimal supervision. In ICML, 2014.
[28] S. Tschiatschek, R. Iyer, H. Wei, and J. Bilmes. Learning mixtures of submodular functions for
image collection summarization. In NIPS, 2014.
[29] J. Zheng, Z. Jiang, R. Chellappa, and J. Phillips. Submodular attribute selection for action
recognition in video. In NIPS, 2014.
Eliciting Categorical Data for Optimal Aggregation
Chien-Ju Ho
Cornell University
[email protected]
Rafael Frongillo
CU Boulder
[email protected]
Yiling Chen
Harvard University
[email protected]
Abstract
Models for collecting and aggregating categorical data on crowdsourcing platforms typically fall into two broad categories: those assuming agents are honest and consistent but with heterogeneous error rates, and those assuming agents are strategic and seek to maximize their expected reward. The former often leads to tractable aggregation of elicited data, while the latter usually focuses on optimal elicitation and does not consider aggregation. In this paper, we develop a Bayesian model, wherein agents have differing quality of information, but also respond to incentives. Our model generalizes both categories and enables the joint exploration of optimal elicitation and aggregation. This model enables our exploration, both analytically and experimentally, of optimal aggregation of categorical data and optimal multiple-choice interface design.
1 Introduction
We study the general problem of eliciting and aggregating information for categorical questions. For example, when posing a classification task to crowd workers who may have heterogeneous skills or amounts of information about the underlying true label, the principal wants to elicit workers' private information and aggregate it in a way that maximizes the probability that the aggregated information correctly predicts the underlying true label.
Ideally, in order to maximize the probability of correctly predicting the ground truth, the principal would want to elicit agents' full information by asking agents for their entire belief in the form of a probability distribution over labels. However, this is not always practical; e.g., agents might not be able to accurately differentiate 92% and 93%. In practice, the principal is often constrained to elicit agents' information via a multiple-choice interface, which discretizes agents' continuous beliefs into finitely many partitions. An example of such an interface is illustrated in Figure 1. Moreover, regardless of whether full or partial information about agents' beliefs is elicited, aggregating the information into a single belief or answer is often done in an ad hoc fashion (e.g., majority voting for simple multiple-choice questions).
Figure 1: An example of the task interface, asking "What's the texture shown in the image?"

In this work, we explore the joint problem of eliciting and aggregating information for categorical data, with a particular focus on how to design the multiple-choice interface, i.e. how to discretize agents' belief space to form discrete choices. The goal is to maximize the probability of correctly predicting the ground truth while incentivizing agents to truthfully report their beliefs. This problem is challenging. Changing the interface not only changes which agent beliefs lead to which responses, but also influences how to optimally aggregate these responses into a single label. Note that we focus on the abstract level of interface design. We explore
the problem of how to partition agents' belief spaces for optimal aggregation. We do not discuss other behavioral aspects of interface design, such as question framing, layouts, etc.
We propose a Bayesian framework, which allows us to achieve our goal in three interleaving steps.
First, we constrain our attention to interfaces which admit economically robust payment functions,
that is, where agents seeking to maximize their expected payment select the answer that corresponds
to their belief. Second, given an interface, we develop a principled way of aggregating information
elicited through it, to obtain the maximum a posteriori (MAP) estimator. Third, given the constraints
on interfaces (e.g., only binary-choice questions are allowed) and aggregation methods, we can then
choose the optimal interface, which leads to the highest prediction accuracy after both elicitation
and aggregation. (Note that if there are no constraints, eliciting full information is always optimal.)
Using theoretical analysis, simulations, and experiments, we provide answers to several interesting
questions. Our main results are summarized as follows:
• If the principal can elicit agents' entire belief distributions, our framework can achieve optimal aggregation, in the sense that the principal can make predictions as if she has observed the private information of all agents (Section 4.1). This resolves the open problem of optimal aggregation for categorical data that was considered impossible to achieve in [1].
• For the binary-choice interface design question, we explore the design of optimal interfaces for small and large numbers of agents (Section 4.2). We conduct human-subject experiments on Amazon's Mechanical Turk and demonstrate that our optimal binary-choice interface leads to better prediction accuracy than a natural baseline interface (Section 5.3).
• Our framework gives a simple principled way of aggregating data from arbitrary interfaces (Section 5.1). Applied to experimental data from [2] for a particular multiple-choice interface, our aggregation method has better prediction accuracy than their majority voting (Section 5.2).
• For general multiple-choice interfaces, we use synthetic experiments to obtain qualitative insights into the optimal interface. Moreover, our simple (heuristic) aggregation method performs nearly optimally, demonstrating the robustness of our framework (Section 5.1).
1.1 Related Work
Eliciting private information from strategic agents has been a central question in economics and other related domains. The focus here is often on designing payment rules such that agents are incentivized to truthfully report their information. In this direction, proper scoring rules [3, 4, 5] have long been used for eliciting beliefs about categorical and continuous variables. When the realized value of a random variable will be observed, proper scoring rules have been designed for eliciting either the complete subjective probability distribution of the random variable [3, 4] or some statistical properties of this distribution [6, 7]. When the realized value of a random variable will not be available, a class of peer prediction mechanisms [8, 9] has been designed for truthful elicitation. These mechanisms often use proper scoring rules and leverage the stochastic relationship of agents' private information about the random variable in a Bayesian setting. However, work in this direction often takes elicitation as an end goal and doesn't offer insights on how to aggregate the elicited information.
Another theme in the existing literature is the development of statistical inference and probabilistic
modeling methods for the purpose of aggregating agents' inputs. Assuming a batch of noisy inputs,
the EM algorithm [10] can be adopted to learn the skill level of agents and obtain estimates of
the best answer [11, 12, 13, 14, 15]. Recently, extensions have been made to also consider task
assignment and online task assignment in the context of these probabilistic models of agents [16,
17, 18, 19]. Work under this theme often assumes non-strategic agents who have some error rate
and are rewarded with a fixed payment that doesn't depend on their reports.
This paper attempts to achieve both truthful elicitation and principled aggregation of information with strategic agents. The closest work to our paper is [1], which has the same general goal and uses a similar Bayesian model of information. That work achieves optimal aggregation by associating the confidence of an agent's prediction with hyperparameters of a conjugate prior distribution. However, this approach leaves optimal aggregation for categorical data as an open question, which we resolve. Moreover, our model allows us to elicit confidence about an answer over a coarsened report space (e.g. a partition of the probability simplex) and to reason about optimal coarsening for the purpose of aggregation. In comparison, [2] also elicits quantified confidence on reported labels in their mechanism. Their mechanism is designed to incentivize agents to truthfully report the label that they believe to be correct when their confidence in the report is above a threshold and to skip the question when it is below the threshold. Majority voting is then used to aggregate the reported labels. These thresholds provide a coarsened report space for eliciting confidence, and thus are well modeled by our approach. However, in that work the thresholds are given a priori, and moreover, the elicited confidence is not used in aggregation. These are both holes which our approach fills; in Section 5, we demonstrate how to derive optimal thresholds, and aggregation policies, which depend critically on the prior distribution and the number of agents.
2 Bayesian Model
In our model, the principal would like to get information about a categorical question (e.g., predicting who will win the presidential election, or identifying whether there is a cancer cell in a picture of cells) from m agents. Each question has a finite number of possible answers X, |X| = k. The ground truth (correct answer) θ is drawn from a prior distribution p(θ), with realized value θ ∈ X. This prior distribution is common knowledge to the principal and the agents. We use θ* to denote the unknown, realized ground truth.

Agents have heterogeneous levels of knowledge or abilities on the question that are unknown to the principal. To model agents' abilities, we assume each agent has observed independent noisy samples related to the ground truth. Hence, each agent's ability can be expressed as the number of noisy samples she has observed. The number of samples observed can be different across agents and is unknown to the principal. Formally, given the ground truth θ*, each noisy sample X, with x ∈ X, is drawn i.i.d. according to the distribution p(x|θ*).¹

In this paper, we focus our discussion on the symmetric noise distribution, defined as

$$p(x|\theta) = (1-\varepsilon)\,\mathbb{1}\{\theta = x\} + \varepsilon \cdot \frac{1}{k}.$$

This noise distribution is common knowledge to the principal and the agents. While the symmetric noise distribution may appear restrictive, it is indeed quite general. In Appendix C, we discuss how our model covers many scenarios considered in the literature as special cases.
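Operationally, a draw from this channel keeps the true label with probability 1 − ε and otherwise emits a uniformly random label (which may coincide with the truth), matching p(x|θ) above. A minimal sampler (ours; label names and parameter values are illustrative):

```python
import random

def draw_sample(theta_star, labels, eps):
    """One noisy sample: p(x|θ) = (1 - ε)·1{x = θ} + ε/k."""
    if random.random() < eps:
        return random.choice(labels)   # uniform over all k labels
    return theta_star

def observe(theta_star, labels, eps, n):
    """The n i.i.d. samples observed by an agent of ability n."""
    return [draw_sample(theta_star, labels, eps) for _ in range(n)]

if __name__ == "__main__":
    random.seed(0)
    print(observe(theta_star="carpet", labels=["carpet", "granite"], eps=0.4, n=10))
```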
Beliefs of Agents. If an agent has observed n noisy samples, X_1 = x_1, ..., X_n = x_n, her belief is determined by a count vector c = {c_θ : θ ∈ X}, where c_θ = Σ_{i=1}^n 1{x_i = θ} is the number of samples equal to θ that the agent has observed. According to Bayes' rule, we write her posterior belief on θ as p(θ|x_1, ..., x_n), which can be expressed as

$$p(\theta|x_1,\dots,x_n) = \frac{\prod_{j=1}^n p(x_j|\theta)\,p(\theta)}{p(x_1,\dots,x_n)} = \frac{\gamma^{c_\theta}\,\beta^{\,n-c_\theta}\,p(\theta)}{\sum_{\theta' \in X} \gamma^{c_{\theta'}}\,\beta^{\,n-c_{\theta'}}\,p(\theta')},$$

where γ = 1 − ε + ε/k and β = ε/k.
In addition to the posterior on θ, the agent also has an updated belief, called the posterior predictive distribution (PPD), about an independent sample X given the observed samples X_1 = x_1, ..., X_n = x_n. The PPD can be considered as a noisy version of the posterior:

$$p(x|x_1,\dots,x_n) = \frac{\varepsilon}{k} + (1-\varepsilon)\,p(\theta = x \mid x_1,\dots,x_n)$$

In fact, in our setting the PPD and posterior are in one-to-one correspondence, so while our theoretical results focus on the PPD, our experiments will consider the posterior without loss of generality.
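Since the belief depends on the samples only through the count vector, the posterior and PPD are a few lines of code. The sketch below (ours) uses γ = 1 − ε + ε/k and β = ε/k as above; the label names are illustrative.

```python
def posterior(counts, prior, eps):
    """p(θ | x_1..x_n) ∝ γ^{c_θ} β^{n−c_θ} p(θ), with γ = 1−ε+ε/k, β = ε/k."""
    k = len(prior)
    n = sum(counts.values())
    gamma, beta = 1 - eps + eps / k, eps / k
    unnorm = {t: (gamma ** counts.get(t, 0)) * (beta ** (n - counts.get(t, 0))) * prior[t]
              for t in prior}
    Z = sum(unnorm.values())
    return {t: v / Z for t, v in unnorm.items()}

def ppd(counts, prior, eps):
    """p(x | x_1..x_n) = ε/k + (1−ε) p(θ = x | x_1..x_n)."""
    k = len(prior)
    post = posterior(counts, prior, eps)
    return {x: eps / k + (1 - eps) * post[x] for x in prior}

if __name__ == "__main__":
    prior = {"carpet": 0.8, "granite": 0.2}
    print(posterior({"carpet": 1, "granite": 3}, prior, eps=0.4))
    print(ppd({"carpet": 1, "granite": 3}, prior, eps=0.4))
```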
Interface. An interface defines the space of reports the principal can elicit from agents. The reports elicited via the interface naturally partition agents' beliefs, a k-dimensional probability simplex, into a (potentially infinite) number of cells, which each correspond to a coarsened version of agents' PPDs. Formally, each interface consists of a report space R and a partition D = {D_r ⊆ ∆_k}_{r∈R}, with each cell D_r corresponding to a report r and ∪_{r∈R} D_r = ∆_k.² In this paper, we sometimes use only R or D to represent an interface.

¹ When there is no ambiguity, we use p(x|θ*) to represent p(X = x|θ = θ*), and similar notations for other distributions.
² Strictly speaking, we will allow cells to overlap on their boundary; see Section 3 for more discussion.
In this paper, we focus on the abstract level of interface design. We explore the problem of how to partition agents' belief spaces for optimal aggregation. We do not discuss other aspects of interface design, such as question framing, layouts, etc. In practice there are often pre-specified constraints on the design of interfaces, e.g., the principal can only ask agents a multiple-choice question with no more than 2 choices. We explore how to optimally design interfaces under given constraints.
Objective. The goal of the principal is to choose an interface corresponding to a partition D, satisfying some constraints, and an aggregation method Agg_D, to maximize the probability of correctly predicting the ground truth. One very important constraint is that there should exist a payment method for which agents are correctly incentivized to report r if their belief is in D_r; see Section 3. We can formulate the goal as the following optimization problem,

$$\max_{(R,D) \in \text{Interfaces}} \ \max_{\text{Agg}_D} \ \Pr\big[\text{Agg}_D(R_1, \dots, R_m) = \theta\big], \qquad (1)$$

where the R_i are random variables representing the reports chosen by agents after θ* and the samples are drawn.
3 Our Mechanism

We assume the principal has access to a single independent noisy sample X drawn from p(x|θ*). The principal can then leverage this sample to elicit and aggregate agents' beliefs by adopting techniques from proper scoring rules [3, 5]. This assumption can be satisfied by, for example, allowing the principal to ask for an additional opinion outside of the m agents, or by asking agents multiple questions and only scoring a small random subset for which answers can be obtained separately (often, on the so-called "gold standard set").

Our mechanism can be described as follows. The principal chooses an interface with report space R and partition D, and a scoring rule S(r, x) for r ∈ R and x ∈ X. The principal then requests a report r_i ∈ R from each agent i ∈ {1, ..., m}, and observes her own sample X = x. She then gives a score of S(r_i, x) to agent i and aggregates the reports via a function Agg_D : R × ··· × R → X. Agents are assumed to be rational and aim to maximize their expected scores. In particular, if an agent i believes X is drawn from some distribution p, she will choose to report r_i ∈ argmax_{r∈R} E_{X∼p}[S(r, X)].
Elicitation. To elicit truthful reports from agents, we adopt techniques from proper scoring rules [3, 5]. A scoring rule is strictly proper if reporting one's true belief uniquely maximizes the expected score. For example, a strictly proper score is the logarithmic scoring rule, S(p, x) = log p(x), where p(x) is the agent's belief about the distribution x is drawn from.
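As a quick numerical illustration (ours), the expected log score E_{X∼p}[log q(X)] of a report q under belief p is uniquely maximized at q = p; the snippet samples a few perturbed reports and checks that none of them beats the truthful one. The example belief and names are arbitrary.

```python
import math, random

def expected_log_score(report, belief):
    """E_{X~belief}[log report(X)] -- maximized exactly when report == belief."""
    return sum(belief[x] * math.log(report[x]) for x in belief)

if __name__ == "__main__":
    random.seed(0)
    belief = {"a": 0.7, "b": 0.2, "c": 0.1}
    truthful = expected_log_score(belief, belief)
    for _ in range(5):
        w = {x: random.random() for x in belief}
        z = sum(w.values())
        report = {x: w[x] / z for x in belief}   # a random non-truthful report
        assert expected_log_score(report, belief) <= truthful + 1e-12
    print("truthful report maximizes the expected log score")
```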
In our setting, we utilize the requester's additional sample from p(x|θ*) to elicit agents' PPDs p(x|x_1, ..., x_n). If the report space is R = ∆_k, we can simply use any strictly proper scoring rule, such as the logarithmic scoring rule, to elicit truthful reports. If the report space R is finite, we must specify what it means to be truthful. The partition D defined in the interface is a way of codifying this relationship: a scoring rule is truthful with respect to a partition if report r is optimal whenever an agent's belief lies in cell D_r.³
Definition 1. S(r, x) is truthful with respect to D if for all r ∈ R and all p ∈ ∆_k we have

$$p \in D_r \iff \forall r' \ne r \quad \mathbb{E}_p S(r, X) \ge \mathbb{E}_p S(r', X).$$
Several natural questions arise from this definition. For which partitions D can we devise such
truthful scores? And if we have such a partition, what are all the scores which are truthful for it?
As it happens, these questions have been answered in the field of property elicitation [20, 21], with
the verdict that there exist truthful scores for D if and only if D forms a power diagram, a type of
weighted Voronoi diagram [22].
Thus, when we consider the problem of designing the interface for a crowdsourcing task, if we want to have robust economic incentives, we must confine ourselves to interfaces which induce power diagrams on the set of agent beliefs. In this paper, we focus on two classes of power diagrams: threshold partitions, where the membership p ∈ D_r can be decided by comparisons of the form t_1 ≤ p_θ ≤ t_2, and shadow partitions, where p ∈ D_r ⟺ r = argmax_x (p(x) − p*(x)) for some reference distribution p*. Threshold partitions cover those from [2], and shadow partitions are inspired by the Shadowing Method from peer prediction [23].

³ As mentioned above, strictly speaking, the cells {D_r}_{r∈R} do not form a partition because their boundaries overlap. This is necessary: for any (nontrivial) finite-report mechanism, there exist distributions for which the agent is indifferent between two or more reports. Fortunately, the set of all such distributions has Lebesgue measure 0 in the simplex, so these boundaries do not affect our analysis.
Aggregation. The goal of the principal is to aggregate the agents' reports into a single prediction which maximizes the probability of correctly predicting the ground truth.

More formally, let us assume that the principal obtains reports r_1, ..., r_m from m agents such that the belief p_i of agent i lies in D^i := D_{r_i}. In order to maximize the probability of correct predictions, the principal aggregates the reports by calculating the posterior p(θ|D^1, ..., D^m) for all θ and making the prediction θ̂ that maximizes the posterior:

$$\hat\theta = \arg\max_\theta\, p(\theta|D^1,\dots,D^m) = \arg\max_\theta \left(\prod_{i=1}^m p(D^i|\theta)\right) p(\theta),$$

where p(D^i|θ) is the probability that the PPD of agent i falls within D^i given the ground truth θ. To calculate p(D|θ), we assume agents' abilities, represented by the number of samples, are drawn from a distribution p(n). We assume p(n) is known to the principal. This assumption can be satisfied if the principal is familiar with the market and has knowledge of agents' skill distribution. Empirically, in our simulations, the optimal interface is robust to the choice of this distribution.
$$p(D|\theta) = \sum_n \left(\sum_{x_1 \dots x_n \,:\, p(\cdot|x_1 \dots x_n) \in D} p(x_1 \dots x_n|\theta)\right) p(n) = \sum_n \left(\sum_{\vec c \,:\, p(\cdot|\vec c) \in D} \binom{n}{\vec c}\, \gamma^{c_\theta}\, \beta^{\,n-c_\theta}\right) \frac{p(n)}{Z(n)},$$

with $Z(n) = \sum_{\vec c} \binom{n}{\vec c}\,\gamma^{c_1}\beta^{\,n-c_1}$ and $\binom{n}{\vec c} = n!/(\prod_i c_i!)$, where c_i is the i-th component of the count vector c.
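For small n, the inner sum can be evaluated exactly by enumerating count vectors. Below is a brute-force sketch (ours); the cell membership test, prior, and parameter values are illustrative, and the ability distribution p(n) is passed as a finite dictionary.

```python
from itertools import product
from math import factorial

def multinomial(counts):
    n = sum(counts)
    out = factorial(n)
    for c in counts:
        out //= factorial(c)
    return out

def p_cell_given_theta(in_cell, theta_idx, prior, eps, p_n):
    """p(D|θ): sum over abilities n and count vectors whose posterior lies in
    the cell D (tested by in_cell(posterior))."""
    k = len(prior)
    gamma, beta = 1 - eps + eps / k, eps / k
    total = 0.0
    for n, pn in p_n.items():
        for counts in product(range(n + 1), repeat=k):
            if sum(counts) != n:
                continue
            # posterior for this count vector
            unnorm = [gamma ** c * beta ** (n - c) * pr for c, pr in zip(counts, prior)]
            Z = sum(unnorm)
            post = [u / Z for u in unnorm]
            if in_cell(post):
                c = counts[theta_idx]
                total += multinomial(counts) * gamma ** c * beta ** (n - c) * pn
    return total

if __name__ == "__main__":
    prior = [0.8, 0.2]                       # binary question
    cell = lambda post: post[0] > 0.8        # "believes label 0 more than the prior"
    p_n = {4: 1.0}                           # every agent observes n = 4 samples
    for theta in (0, 1):
        print(theta, p_cell_given_theta(cell, theta, prior, eps=0.4, p_n=p_n))
```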
Interface Design. Let P(D) be the probability of correctly predicting the ground truth given partition D, assuming the best possible aggregation policy. The expectation is taken over which cell D^i ∈ D agent i reports, for m agents:

$$P(D) = \sum_{D^1,\dots,D^m} \max_\theta\, p(\theta|D^1,\dots,D^m)\, p(D^1,\dots,D^m) = \sum_{D^1,\dots,D^m} \max_\theta \left(\prod_{i=1}^m p(D^i|\theta)\right) p(\theta).$$

The optimal interface design problem is to find an interface with partition D, within the set of feasible interfaces, such that in expectation P(D) is maximized.
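Given the per-cell likelihoods p(D_r|θ), computable as above, P(D) can be evaluated by brute force over report profiles when m is small. A sketch (ours, with illustrative numbers):

```python
from itertools import product

def prediction_accuracy(p_cell_given_theta, prior, m):
    """P(D) = Σ_{D^1..D^m} max_θ (Π_i p(D^i|θ)) p(θ), by brute-force enumeration.
    `p_cell_given_theta[r][θ]` is p(D_r | θ)."""
    reports = list(p_cell_given_theta)
    thetas = range(len(prior))
    total = 0.0
    for profile in product(reports, repeat=m):
        joint = []
        for t in thetas:
            v = prior[t]
            for r in profile:
                v *= p_cell_given_theta[r][t]
            joint.append(v)
        total += max(joint)                  # MAP aggregation per profile
    return total

if __name__ == "__main__":
    # Toy two-cell interface on a binary question (numbers are illustrative).
    pD = {"hi": [0.7, 0.1], "lo": [0.3, 0.9]}
    print(prediction_accuracy(pD, prior=[0.8, 0.2], m=3))
```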
4 Theoretical Analysis

In this section, we analyze two settings to illustrate what our mechanism can achieve. We first consider the setting in which the principal can elicit full belief distributions from agents. We show that our mechanism can obtain optimal aggregation, in the sense that the principal can make predictions as if she had observed all the private signals observed by all workers. In the second setting, we consider a common setting with binary signals and binary cells (e.g., binary classification tasks with a two-option interface). We demonstrate how to choose the optimal interface when we aim to collect data from one single agent and when we aim to collect data from a large number of agents.
4.1 Collecting Full Distributions

Consider the setting in which the allowed reports are full distributions over labels. We show that in this setting, the principal can achieve optimal aggregation. Formally, the interface consists of a report space R = ∆_k ⊆ [0, 1]^k, the k-dimensional probability simplex, corresponding to beliefs about the principal's sample X given the observed samples of an agent. The aggregation is optimal if the principal can obtain the global PPD.

Definition 2 ([1]). Let S be the set of all samples observed by agents. Given the prior p(θ) and data S distributed among the agents, the global PPD is given by p(x|S).
In general, as noted in [1], computing the global PPD requires access to agents' actual samples, or at least their counts, whereas the principal can at most elicit the PPD. In that work, it is therefore considered impossible for the principal to leverage a single sample to obtain the global PPD for a categorical question, as there does not exist a unique mapping from PPDs to sample counts. While our setting differs from that paper, we intuitively resolve this impossibility by finding a non-trivial unique mapping between the differences of sample counts and PPDs.

Lemma 1. Fix θ_0 ∈ X and let d^i ∈ ℤ^{k−1} be the vector with entries d^i_θ = c^i_{θ_0} − c^i_θ, encoding the differences between the number of samples of θ and of θ_0 that agent i has observed. There exists a unique mapping between d^i and the PPD of agent i.

With Lemma 1 in hand, assuming the principal can obtain the full PPD from each agent, she can now compute the global PPD: she simply converts each agent's PPD into a sample count difference, sums these differences, and finally converts the total differences into the global PPD.

Theorem 2. Given the PPDs of all agents, the principal can obtain the global PPD.
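A sketch of this pipeline (ours, working with posteriors rather than PPDs, which is equivalent as noted in Section 2): invert each reported posterior to its count-difference vector via log-odds, sum the differences, and convert back. The helper names are our own.

```python
import math

def count_diffs_from_posterior(post, prior, eps):
    """Invert Bayes' rule (Lemma 1): recover d_theta = c_{theta0} - c_theta from
    posterior log-odds. theta0 is the first label; d_{theta0} = 0 by definition."""
    labels = list(prior)
    k = len(labels)
    gamma, beta = 1 - eps + eps / k, eps / k
    t0 = labels[0]
    return {t: -(math.log(post[t] / post[t0]) - math.log(prior[t] / prior[t0]))
               / math.log(gamma / beta)
            for t in labels}

def global_posterior(diff_list, prior, eps):
    """Sum the agents' count differences and convert back: the global posterior
    is proportional to p(theta) * (gamma/beta)^{-D_theta} up to a common factor."""
    labels = list(prior)
    k = len(labels)
    gamma, beta = 1 - eps + eps / k, eps / k
    D = {t: sum(d[t] for d in diff_list) for t in labels}
    unnorm = {t: prior[t] * (gamma / beta) ** (-D[t]) for t in labels}
    Z = sum(unnorm.values())
    return {t: v / Z for t, v in unnorm.items()}

if __name__ == "__main__":
    prior, eps = {"a": 0.5, "b": 0.5}, 0.4
    gamma, beta = 1 - eps + eps / 2, eps / 2
    def post_from_counts(c):   # posterior an agent would hold after counts c
        u = {t: gamma ** c[t] * beta ** (sum(c.values()) - c[t]) * prior[t] for t in prior}
        Z = sum(u.values())
        return {t: v / Z for t, v in u.items()}
    agents = [{"a": 3, "b": 1}, {"a": 1, "b": 2}]
    diffs = [count_diffs_from_posterior(post_from_counts(c), prior, eps) for c in agents]
    print(global_posterior(diffs, prior, eps))      # equals the pooled-count posterior
    print(post_from_counts({"a": 4, "b": 3}))       # check: same distribution
```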
4.2 Interface Design in Binary Settings

To gain intuition about optimal interface design, we examine a simple setting with binary signals X = {0, 1} and partitions with only two cells. To simplify the discussion, we also assume all agents have observed exactly n samples. In this setting, each partition can be determined by a single parameter, the threshold p_T; its cells indicate whether or not the agent believes the probability of the principal's sample X being 0 is larger than p_T. Note that we can also write the threshold as T, the number of samples that the agent observes to be signal 0. Membership in the two cells indicates whether or not the agent observes more than T samples with signal 0.

We first give the result when there is only one agent.⁴

Lemma 3. In the binary-signal and two-cell setting, if the number of agents is one, the optimal partition has threshold p_T* = 1/2.

⁴ Our result can be generalized to k signals and one agent; see Lemma 4 in Appendix G.

If the number of agents is large, we numerically solve for the optimal partition over a wide range of parameters. We find that the optimal partition sets the threshold such that agents' posterior belief on the ground truth equals the prior. This is equivalent to asking agents whether they observed more samples with signal 0 or with signal 1. Please see Appendices B and H for more discussion.
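The optimal two-cell threshold can also be recovered numerically under the model. The brute-force sketch below (ours; parameter values are illustrative) scores each candidate T, the number of 0-samples required to report the first option, by the exact MAP prediction accuracy for m agents.

```python
from math import comb

def accuracy(T, n, m, eps, prior):
    """Prediction accuracy of MAP aggregation when each of m agents (n samples
    each) reports whether they saw at least T zero-samples."""
    q = [1 - eps / 2, eps / 2]            # P(sample = 0 | θ) for θ = 0, 1
    # probability an agent reports "at least T zeros", per ground truth
    r = [sum(comb(n, c) * q[t] ** c * (1 - q[t]) ** (n - c) for c in range(T, n + 1))
         for t in (0, 1)]
    acc = 0.0
    for j in range(m + 1):                # j agents report "at least T zeros"
        joint = [comb(m, j) * r[t] ** j * (1 - r[t]) ** (m - j) * prior[t] for t in (0, 1)]
        acc += max(joint)                 # MAP picks the larger posterior mass
    return acc

if __name__ == "__main__":
    n, m, eps, prior = 4, 9, 0.4, [0.8, 0.2]
    best = max(range(n + 1), key=lambda T: accuracy(T, n, m, eps, prior))
    print("best threshold T:", best, "accuracy:", round(accuracy(best, n, m, eps, prior), 4))
```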
The above arguments suggest that when the principal plans to collect data from multiple agents for datasets with asymmetric priors (e.g., identifying anomalous images in a big dataset), adopting our interface would lead to better aggregation than the traditional interface does. We have evaluated this claim in real-world experiments in Section 5.3.
5 Experiments
To confirm our theoretical results and test our model, we turn to experimental results. In our synthetic experiments, we simply explore what the model tells us about optimal partitions and how they
behave as a function of the model, giving us qualitative insights into interface design. We also introduce a heuristic aggregation method, which allows our results to be easily applied in practice. In
addition to validating our heuristics numerically, we show that they lead to real improvements over
simple majority voting by re-aggregating some data from previous work [2]. Finally, we perform
our own experiments for a binary signal task and show that the optimal mechanism under the model,
coupled with heuristic aggregation, significantly outperforms the baseline.
5.1
Synthetic Experiments
From our theoretical results, we expect that in the binary setting, the boundary of the optimal partition should be roughly uniform for small numbers of agents and quickly approach the prior as the
number of agents per task increases. In the Appendix, we confirm this numerically. Figure 2 extends this intuition to the 3-signal case, where the optimal reference point p? for a shadow partition
closely tracks the prior. Figure 2 also gives insight into the design of threshold partitions, showing
Figure 2: Optimal interfaces as a function of the model; the prior is shown in each as a red dot. Each triangle represents the probability simplex on three signals (0, 1, 2), and the cells (sets of posteriors) of the partition defined by the interface are delineated by dashed lines. Top: the optimal shadow partition for three agents. Here the reference distribution p* is close to the prior, but often slightly toward uniform, as suggested by the behavior in the binary case (Section 4.2); for larger numbers of agents this point in fact always matches the prior. Bottom: the optimal threshold partition for increasing values of ε. Here, as one would expect, the more uncertainty agents have about the true label, the lower the thresholds should be.

Figure 3: Prediction error according to our model as a function of the prior for (a) the optimal partition with optimal aggregation, (b) the optimal partition with heuristic aggregation, and (c) the naïve partition and aggregation. As we see, the heuristics are nearly optimal and yield significantly lower error than the baseline.
The optimal partitions and aggregation policies suggested by our framework are often quite complicated. Thus, to be practical, one would like simple partitions and aggregation methods which perform nearly optimally under our framework. Here we suggest a heuristic aggregation (HA) method which is defined for a fixed number of samples n: for each cell D_r, consider the set of count vectors after which an agent's posterior would lie in D_r, and let c_r be the average count vector in this set. Now when agents report r_1, …, r_m, simply sum the count vectors and choose θ̂ = HA(r_1, …, r_m) = argmax_θ p(θ | c_{r_1} + ⋯ + c_{r_m}). Thus, by simply translating the choice of cell D_r to a representative sample count an agent may have observed, we arrive at a weighted-majority-like aggregation method. This simple method performs quite well in simulations, as Figure 3 shows.
It also performs well in practice, as we will see in the next two subsections.
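A minimal sketch of this aggregator is given below. All names are ours, and the noise model (each of an agent's n samples matches the true label θ with probability γ and is uniform over the other labels otherwise) is an illustrative assumption used for concreteness, not necessarily the exact model of the paper.

```python
import itertools
import numpy as np

def posterior(prior, gamma, counts):
    """Posterior over the true label theta given a (possibly fractional) count vector.

    Assumed noise model: each sample equals theta with probability gamma and is
    uniform over the remaining labels otherwise."""
    k, n = len(prior), np.sum(counts)
    like = np.array([gamma ** counts[t] * ((1 - gamma) / (k - 1)) ** (n - counts[t])
                     for t in range(k)])
    post = np.asarray(prior) * like
    return post / post.sum()

def cell_representatives(prior, gamma, n, cells):
    """c_r = average count vector over all count vectors whose posterior lies in D_r.

    `cells` maps a report r to a predicate on the posterior (membership in D_r)."""
    k = len(prior)
    reps = {r: [] for r in cells}
    for c in itertools.product(range(n + 1), repeat=k):
        if sum(c) == n:
            c = np.array(c, dtype=float)
            p = posterior(prior, gamma, c)
            for r, in_cell in cells.items():
                if in_cell(p):
                    reps[r].append(c)
    return {r: np.mean(v, axis=0) for r, v in reps.items() if v}

def heuristic_aggregate(prior, gamma, reps, reports):
    """HA: sum the representative counts of the reported cells, return the MAP label."""
    total = sum(reps[r] for r in reports)
    return int(np.argmax(posterior(prior, gamma, total)))
```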
5.2 Aggregation Results for Existing Mechanisms
We evaluate our heuristic aggregation method using the dataset collected from existing mechanisms in previous work [2]. Their dataset is collected by asking workers to answer a multiple-choice question and select one of the two confidence levels at the same time. We compared our heuristic aggregation (HA) with simple majority voting (Maj) as adopted in their paper. For our heuristics, we used the model with n = 4 and γ = 0.85 for every case here; this was the simplest model for which every cell in every partition contained at least one possible posterior. Our results are fairly robust to the choice of the model subject to this constraint, however, and often other models perform even better. In Figure 4, we demonstrate the aggregation results for one of their tasks ("National Flags") in the dataset. Although the improvement is relatively small, it is statistically significant for every setting plotted. Our HA outperformed Maj for all of their datasets and for all values of m.
Figure 4: The prediction error of aggregating data collected from existing mechanisms in previous work [2].
Figure 5: The prediction error of aggregating data collected from Amazon Mechanical Turk.
5.3 Experiments on Amazon Mechanical Turk
We conducted experiments on Amazon Mechanical Turk (mturk.com) to evaluate our interface
design. Our goal was to examine whether workers respond to different interfaces, and whether the
interface and aggregation derived from our framework actually leads to better predictions.
Experiment setup. In our experiment, workers are asked to label 20 blurred images of textures.
We considered an asymmetric prior: 80% of the images were carpet and 20% were granite, and we
communicated this to the workers. Upon accepting the task, workers were randomly assigned to one
of two treatments: Baseline or ProbBased. Both offered a base payment of 10 cents, but the bonus
payments on the 5 randomly chosen "ground truth" images differed between the treatments.
The Baseline treatment is the most commonly seen interface in crowdsourcing markets. For each
image, the worker is asked to choose from {Carpet, Granite}. She can get a bonus of 4 cents for each
correct answer in the ground truth set. In the ProbBased interface, the worker was asked whether
she thinks the probability of the image to be Carpet is {more than 80%, no more than 80%}. From
Section 4.2, this threshold is optimal when we aim to aggregate information from a potentially large
number of agents. To simplify the discussion, we map the two options to {Carpet, Granite} for the
rest of this section. For the 5 randomly chosen ground truth images, the worker would get 2 cents
for each correct answer of carpet images, and get 8 cents for each correct answer of granite images.
We tuned the bonus amount such that the expected bonus for answering all questions correctly is
approximately the same for each treatment. One can also easily check that for these bonus amounts,
workers maximize their expected bonus by honestly reporting their beliefs.
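As an illustrative sanity check on these amounts (the composition of the 5 ground-truth images is not reported here, so the splits below are our assumptions): with 4 carpet and 1 granite image, a perfectly accurate ProbBased worker earns 4 · 2 + 1 · 8 = 16 cents; with 3 carpet and 2 granite images, 3 · 2 + 2 · 8 = 22 cents. Both are close to the 5 · 4 = 20 cents a perfectly accurate Baseline worker earns.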
Results. This experiment was completed by 200 workers, 105 in Baseline and 95 in ProbBased. We
first observe whether workers' responses differ for different interfaces. In particular, we compare the
ratio of workers reporting Granite. As shown in Figure 6 (in Appendix A), our result demonstrates
that workers do respond to our interface design and are more likely to choose Granite for all images.
The differences are statistically significant (p < 0.01). We then examine whether this interface combined with our heuristic aggregation leads to better predictions. We perform majority voting (Maj)
for Baseline, and apply our heuristic aggregation (HA) to ProbBased. We choose the simplest model
(n = 1) for HA though the results are robust for higher n. Figure 5 shows that our interface leads to
considerably smaller aggregation error for different numbers of randomly selected workers. Performing
HA for Baseline and Maj for ProbBased both led to higher aggregation errors, which underscores
the importance of matching the aggregation to the interface.
6 Conclusion
We have developed a Bayesian framework to model the elicitation and aggregation of categorical
data, giving a principled way to aggregate information collected from arbitrary interfaces, but also
to design the interfaces themselves. Our simulation and experimental results show the benefit of our
framework, resulting in significant prediction performance gains over standard interfaces and aggregation methods. Moreover, our theoretical and simulation results give new insights into the design
of optimal interfaces, some of which we confirm experimentally. While certainly more experiments
are needed to fully validate our methods, we believe our general framework to have value when
designing interfaces and aggregation policies for eliciting categorical information.
Acknowledgments
We thank the anonymous reviewers for their helpful comments. This research was partially supported by NSF grant CCF-1512964, NSF grant CCF-1301976, and ONR grant N00014-15-1-2335.
References
[1] R. M. Frongillo, Y. Chen, and I. Kash. Elicitation for aggregation. In The Twenty-Ninth AAAI Conference on Artificial Intelligence, 2015.
[2] N. B. Shah and D. Zhou. Double or nothing: Multiplicative incentive mechanisms for crowdsourcing. In Neural Information Processing Systems, NIPS '15, 2015.
[3] G. W. Brier. Verification of forecasts expressed in terms of probability. Monthly Weather Review, 78(1):1–3, 1950.
[4] L. J. Savage. Elicitation of personal probabilities and expectations. Journal of the American Statistical Association, 66(336):783–801, 1971.
[5] T. Gneiting and A. E. Raftery. Strictly proper scoring rules, prediction, and estimation. Journal of the American Statistical Association, 102(477):359–378, 2007.
[6] N. S. Lambert, D. M. Pennock, and Y. Shoham. Eliciting properties of probability distributions. In Proceedings of the 9th ACM Conference on Electronic Commerce, EC '08, pages 129–138. ACM, 2008.
[7] R. Frongillo and I. Kash. Vector-valued property elicitation. In Proceedings of the 28th Conference on Learning Theory, pages 1–18, 2015.
[8] N. Miller, P. Resnick, and R. Zeckhauser. Eliciting informative feedback: The peer-prediction method. Management Science, 51(9):1359–1373, 2005.
[9] D. Prelec. A Bayesian truth serum for subjective data. Science, 306(5695):462–466, 2004.
[10] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society: Series B, 39:1–38, 1977.
[11] V. Raykar, S. Yu, L. Zhao, G. Valadez, C. Florin, L. Bogoni, and L. Moy. Learning from crowds. Journal of Machine Learning Research, 11:1297–1322, 2010.
[12] S. R. Cholleti, S. A. Goldman, A. Blum, D. G. Politte, and S. Don. Veritas: Combining expert opinions without labeled data. In Proceedings of the 20th IEEE International Conference on Tools with Artificial Intelligence, 2008.
[13] R. Jin and Z. Ghahramani. Learning with multiple labels. In Advances in Neural Information Processing Systems, volume 15, pages 897–904, 2003.
[14] J. Whitehill, P. Ruvolo, T. Wu, J. Bergsma, and J. Movellan. Whose vote should count more: Optimal integration of labels from labelers of unknown expertise. In Advances in Neural Information Processing Systems, volume 22, pages 2035–2043, 2009.
[15] A. P. Dawid and A. M. Skene. Maximum likelihood estimation of observer error-rates using the EM algorithm. Applied Statistics, 28:20–28, 1979.
[16] D. R. Karger, S. Oh, and D. Shah. Iterative learning for reliable crowdsourcing systems. In The 25th Annual Conference on Neural Information Processing Systems (NIPS), 2011.
[17] D. R. Karger, S. Oh, and D. Shah. Budget-optimal crowdsourcing using low-rank matrix approximations. In Proc. 49th Annual Conference on Communication, Control, and Computing (Allerton), 2011.
[18] J. Zou and D. C. Parkes. Get another worker? Active crowdlearning with sequential arrivals. In Proceedings of the Workshop on Machine Learning in Human Computation and Crowdsourcing, 2012.
[19] C. Ho, S. Jabbari, and J. W. Vaughan. Adaptive task assignment for crowdsourced classification. In The 30th International Conference on Machine Learning (ICML), 2013.
[20] N. Lambert and Y. Shoham. Eliciting truthful answers to multiple-choice questions. In Proceedings of the Tenth ACM Conference on Electronic Commerce, EC '09, pages 109–118, 2009.
[21] R. Frongillo and I. Kash. General truthfulness characterizations via convex analysis. In Web and Internet Economics, pages 354–370. Springer, 2014.
[22] F. Aurenhammer. Power diagrams: Properties, algorithms and applications. SIAM Journal on Computing, 16(1):78–96, 1987.
[23] J. Witkowski and D. Parkes. A robust Bayesian truth serum for small populations. In Proceedings of the 26th AAAI Conference on Artificial Intelligence, AAAI '12, 2012.
[24] V. Sheng, F. Provost, and P. Ipeirotis. Get another label? Improving data quality using multiple, noisy labelers. In ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 2008.
[25] P. Ipeirotis, F. Provost, V. Sheng, and J. Wang. Repeated labeling using multiple noisy labelers. Data Mining and Knowledge Discovery, 2014.
Globally Optimal Training of Generalized Polynomial Neural Networks with Nonlinear Spectral Methods
A. Gautier, Q. Nguyen and M. Hein
Department of Mathematics and Computer Science
Saarland Informatics Campus, Saarland University, Germany
Abstract
The optimization problem behind neural networks is highly non-convex.
Training with stochastic gradient descent and variants requires careful
parameter tuning and provides no guarantee to achieve the global optimum.
In contrast we show under quite weak assumptions on the data that a
particular class of feedforward neural networks can be trained globally
optimal with a linear convergence rate with our nonlinear spectral method.
Up to our knowledge this is the first practically feasible method which
achieves such a guarantee. While the method can in principle be applied to
deep networks, we restrict ourselves for simplicity in this paper to one and
two hidden layer networks. Our experiments confirm that these models are
rich enough to achieve good performance on a series of real-world datasets.
1 Introduction
Deep learning [13, 16] is currently the state of the art machine learning technique in
many application areas such as computer vision or natural language processing. While the
theoretical foundations of neural networks have been explored in depth see e.g. [1], the
understanding of the success of training deep neural networks is a currently very active
research area [5, 6, 9]. On the other hand the parameter search for stochastic gradient descent
and variants such as Adagrad and Adam can be quite tedious and there is no guarantee that
one converges to the global optimum. In particular, the problem is even for a single hidden
layer in general NP hard, see [17] and references therein. This implies that to achieve global
optimality efficiently one has to impose certain conditions on the problem.
A recent line of research has directly tackled the optimization problem of neural networks
and provided either certain guarantees [2, 15] in terms of the global optimum or proved
directly convergence to the global optimum [8, 11]. The latter two papers are up to our
knowledge the first results which provide a globally optimal algorithm for training neural
networks. While providing a lot of interesting insights on the relationship of structured
matrix factorization and training of neural networks, Haeffele and Vidal admit themselves
in their paper [8] that their results are "challenging to apply in practice". In the work of
Janzamin et al. [11] they use a tensor approach and propose a globally optimal algorithm
for a feedforward neural network with one hidden layer and squared loss. However, their
approach requires the computation of the score function tensor which uses the density of
the data-generating measure. However, the data generating measure is unknown and also
difficult to estimate for high-dimensional feature spaces. Moreover, one has to check certain
non-degeneracy conditions of the tensor decomposition to get the global optimality guarantee.
In contrast our nonlinear spectral method just requires that the data is nonnegative which is
true for all sorts of count data such as images, word frequencies etc. The condition which
guarantees global optimality just depends on the parameters of the architecture of the network
and boils down to the computation of the spectral radius of a small nonnegative matrix.
The condition can be checked without running the algorithm. Moreover, the nonlinear
spectral method has a linear convergence rate and thus the globally optimal training of the
network is very fast. The two main changes compared to the standard setting are that we
require nonnegativity on the weights of the network and we have to minimize a modified
objective function which is the sum of loss and the negative total sum of the outputs. While
this model is non-standard, we show in some first experimental results that the resulting
classifier is still expressive enough to create complex decision boundaries. As well, we achieve
competitive performance on some UCI datasets. As the nonlinear spectral method requires
some non-standard techniques, we use the main part of the paper to develop the key steps
necessary for the proof. However, some proofs of the intermediate results are moved to the
supplementary material.
2 Main result
In this section we present the algorithm together with the
main theorem providing the convergence guarantee. We limit
the presentation to one hidden layer networks to improve the
readability of the paper. Our approach can be generalized
to feedforward networks of arbitrary depth. In particular, we
present in Section 4.1 results for two hidden layers.
We consider in this paper multi-class classification where d is the dimension of the feature space and K is the number of classes. We use the negative cross-entropy loss defined for label y ∈ [K] := {1, …, K} and classifier f : ℝ^d → ℝ^K as

L(y, f(x)) = −log( e^{f_y(x)} / Σ_{j=1}^K e^{f_j(x)} ) = −f_y(x) + log( Σ_{j=1}^K e^{f_j(x)} ).
Figure 1: Classification decision boundaries in ℝ². (Best viewed in colors.)
The function class we are using is a feedforward neural network with one hidden layer with n_1 hidden units. As activation functions we use real powers of the form of a generalized polynomial, that is for α ∈ ℝ^{n_1} with α_l ≥ 1, l ∈ [n_1], we define:

f_r(x) = f_r(w, u)(x) = Σ_{l=1}^{n_1} w_{rl} ( Σ_{m=1}^{d} u_{lm} x_m )^{α_l},    (1)

where ℝ_+ = {x ∈ ℝ | x ≥ 0} and w ∈ ℝ_+^{K×n_1}, u ∈ ℝ_+^{n_1×d} are the parameters of the network
+
which we optimize. The function class in (1) can be seen as a generalized polynomial in the
sense that the powers do not have to be integers. Polynomial neural networks have been
recently analyzed in [15]. Please note that a ReLU activation function makes no sense in
our setting as we require the data as well as the weights to be nonnegative. Even though
nonnegativity of the weights is a strong constraint, one can model quite complex decision
boundaries (see Figure 1, where we show the outcome of our method for a toy dataset in R2 ).
In order to simplify the notation we use w = (w1 , . . . , wK ) for the K output units wi ? Rn+1 ,
i = 1, . . . , K. All output units and the hidden layer are normalized. We optimize over the set
1
S+ = (w, u) ? RK?n
? Rn+1 ?d kukpu = ?u , kwi kpw = ?w , ?i = 1, . . . , K .
+
We also introduce S++ where one replaces R+ with R++ = {t ? R | t > 0}. The final
optimization problem we are going to solve is given as
max
?(w, u) with
(2)
(w,u)?S+
?(w, u) =
n1
n1 X
n
K
K X
d
X
i
X
X
1 Xh
? L yi , f (w, u)(xi ) +
fr (w, u)(xi ) +
wr,l +
ulm ,
n i=1
r=1
r=1
m=1
l=1
2
l=1
where (x^i, y_i) ∈ ℝ_+^d × [K], i = 1, …, n is the training data. Note that this is a maximization
problem and thus we use minus the loss in the objective so that we are effectively minimizing
the loss. The reason to write this as a maximization problem is that our nonlinear spectral
method is inspired by the theory of (sub)-homogeneous nonlinear eigenproblems on convex
cones [14] which has its origin in the Perron-Frobenius theory for nonnegative matrices.
In fact our work is motivated by the closely related Perron-Frobenius theory for multi-homogeneous problems developed in [7]. This is also the reason why we have nonnegative weights, as we work on the positive orthant which is a convex cone. Note that ε > 0 in the objective can be chosen arbitrarily small and is added out of technical reasons.
In order to state our main theorem we need some additional notation. For p ∈ (1, ∞), we let p′ = p/(p − 1) be the Hölder conjugate of p, and ψ_p(x) = sign(x)|x|^{p−1}. We apply ψ_p to scalars and vectors in which case the function is applied componentwise. For a square matrix A we denote its spectral radius by ρ(A). Finally, we write ∇_{w_i}Φ(w, u) (resp. ∇_uΦ(w, u)) to denote the gradient of Φ with respect to w_i (resp. u) at (w, u). The mapping

G^Φ(w, u) = ( ρ_w ψ_{p′_w}(∇_{w_1}Φ(w, u)) / ‖ψ_{p′_w}(∇_{w_1}Φ(w, u))‖_{p_w}, …, ρ_w ψ_{p′_w}(∇_{w_K}Φ(w, u)) / ‖ψ_{p′_w}(∇_{w_K}Φ(w, u))‖_{p_w}, ρ_u ψ_{p′_u}(∇_uΦ(w, u)) / ‖ψ_{p′_u}(∇_uΦ(w, u))‖_{p_u} )    (3)
defines a sequence converging to the global optimum of (2). Indeed, we prove:
Theorem 1. Let {x^i, y_i}_{i=1}^n ⊂ ℝ_+^d × [K], p_w, p_u ∈ (1, ∞), ρ_w, ρ_u > 0, n_1 ∈ ℕ and α ∈ ℝ^{n_1} with α_i ≥ 1 for every i ∈ [n_1]. Define ρ_x, ξ_1, ξ_2 > 0 as ρ_x = max_{i∈[n]} ‖x^i‖_1, ξ_1 = ρ_w Σ_{l=1}^{n_1} (ρ_u ρ_x)^{α_l}, ξ_2 = ρ_w Σ_{l=1}^{n_1} α_l (ρ_u ρ_x)^{α_l}, and let A ∈ ℝ_{++}^{(K+1)×(K+1)} be defined as

A_{l,m} = 4(p′_w − 1) ξ_1,    A_{l,K+1} = 2(p′_w − 1)(2ξ_2 + ‖α‖_∞),
A_{K+1,m} = 2(p′_u − 1)(2ξ_1 + 1),    A_{K+1,K+1} = 2(p′_u − 1)(2ξ_2 + ‖α‖_∞ − 1),    ∀ m, l ∈ [K].

If the spectral radius ρ(A) of A satisfies ρ(A) < 1, then (2) has a unique global maximizer (w*, u*) ∈ S_{++}. Moreover, for every (w^0, u^0) ∈ S_{++}, there exists R > 0 such that

lim_{k→∞} (w^k, u^k) = (w*, u*)    and    ‖(w^k, u^k) − (w*, u*)‖_∞ ≤ R ρ(A)^k    ∀k ∈ ℕ,

where (w^{k+1}, u^{k+1}) = G^Φ(w^k, u^k) for every k ∈ ℕ.
Note that one can check for a given model (number of hidden units n_1, choice of α, p_w, p_u, ρ_u, ρ_w) easily if the convergence guarantee to the global optimum holds by computing the spectral radius of a square matrix of size K + 1. As our bounds for the matrix A are very conservative, the "effective" spectral radius is typically much smaller, so that we have very fast convergence in only a few iterations, see Section 5 for a discussion. Up to our knowledge this is the first practically feasible algorithm to achieve global optimality for a non-trivial neural network model. Additionally, compared to stochastic gradient descent, there is no free parameter in the algorithm. Thus no careful tuning of the learning rate is required. The reader might wonder why we add the second term in the objective, where we sum over all outputs. The reason is that we need the gradient of Φ to be strictly positive in S_+, and this is also why we have to add the third term for arbitrarily small ε > 0. In Section 5 we show that this model achieves competitive results on a few UCI datasets.
Choice of α: It turns out that in order to get a non-trivial classifier one has to choose α_1, …, α_{n_1} ≥ 1 so that α_i ≠ α_j for every i, j ∈ [n_1] with i ≠ j. The reason for this lies in certain invariance properties of the network. Suppose that we use a permutation invariant componentwise activation function σ, that is σ(Px) = Pσ(x) for any permutation matrix P, and suppose that A, B are globally optimal weight matrices for a one hidden layer architecture; then for any permutation matrix P,

A σ(Bx) = A Pᵀ P σ(Bx) = A Pᵀ σ(P B x),

which implies that A′ = A Pᵀ and B′ = P B yield the same function and thus are also globally optimal. In our setting we know that the global optimum is unique and thus it has to hold that A = A Pᵀ and B = P B for all permutation matrices P. This implies that both A and B have rank one and thus lead to trivial classifiers. This is the reason why one has to use different α_l for every unit.
Dependence of ρ(A) on the model parameters: Let Q, Q̃ ∈ ℝ_+^{m×m} and assume 0 ≤ Q_{i,j} ≤ Q̃_{i,j} for every i, j ∈ [m]; then ρ(Q) ≤ ρ(Q̃), see Corollary 3.30 [3]. It follows that ρ(A) in Theorem 1 is increasing w.r.t. ρ_u, ρ_w, ρ_x and the number of hidden units n_1. Moreover, ρ(A) is decreasing w.r.t. p_u, p_w and in particular, we note that for any fixed architecture (n_1, α, ρ_u, ρ_w) it is always possible to find p_u, p_w large enough so that ρ(A) < 1. Indeed, we know from the Collatz-Wielandt formula (Theorem 8.1.26 in [10]) that ρ(A) = ρ(Aᵀ) ≤ max_{i∈[K+1]} (Aᵀv)_i / v_i for any v ∈ ℝ_{++}^{K+1}. We use this to derive lower bounds on p_u, p_w that ensure ρ(A) < 1. Let v = (p_w − 1, …, p_w − 1, p_u − 1); then (Aᵀv)_i < v_i for every i ∈ [K + 1] guarantees ρ(A) < 1 and is equivalent to

p_w > 4(K + 1)ξ_1 + 3    and    p_u > 2(K + 1)(‖α‖_∞ + 2ξ_2) − 1,    (4)

where ξ_1, ξ_2 are defined as in Theorem 1. However, we think that our current bounds are sub-optimal so that this choice is quite conservative. Finally, we note that the constant R in Theorem 1 can be explicitly computed when running the algorithm (see Theorem 3).
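Since A depends only on (K, α, ρ_w, ρ_u, ρ_x, p_w, p_u), the guarantee can be checked before any training. A small sketch (ours):

```python
import numpy as np

def spectral_condition(K, alpha, rho_w, rho_u, rho_x, p_w, p_u):
    """Builds the (K+1)x(K+1) matrix A of Theorem 1 and returns (rho(A), rho(A) < 1)."""
    pwc, puc = p_w / (p_w - 1.0), p_u / (p_u - 1.0)   # Hoelder conjugates p'_w, p'_u
    alpha = np.asarray(alpha, dtype=float)
    xi1 = rho_w * np.sum((rho_u * rho_x) ** alpha)
    xi2 = rho_w * np.sum(alpha * (rho_u * rho_x) ** alpha)
    A = np.empty((K + 1, K + 1))
    A[:K, :K] = 4 * (pwc - 1) * xi1
    A[:K, K] = 2 * (pwc - 1) * (2 * xi2 + alpha.max())
    A[K, :K] = 2 * (puc - 1) * (2 * xi1 + 1)
    A[K, K] = 2 * (puc - 1) * (2 * xi2 + alpha.max() - 1)
    rho = np.max(np.abs(np.linalg.eigvals(A)))
    return rho, rho < 1
```

For instance, spectral_condition(K=3, alpha=[1.0, 1.5, 2.0], rho_w=0.5, rho_u=0.5, rho_x=1.0, p_w=50, p_u=80) reports ρ(A) < 1, so scanning over (p_w, p_u) for admissible values is cheap.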
Proof Strategy: The following main part of the paper is devoted to the proof of the algorithm. For that we need some further notation. We introduce the sets

V_+ = ℝ_+^{K×n_1} × ℝ_+^{n_1×d},    V_{++} = ℝ_{++}^{K×n_1} × ℝ_{++}^{n_1×d},
B_+ = { (w, u) ∈ V_+ | ‖u‖_{p_u} ≤ ρ_u, ‖w_i‖_{p_w} ≤ ρ_w, ∀i = 1, …, K },
and similarly we define B_{++} replacing V_+ by V_{++} in the definition. The high-level idea of the proof is that we first show that the global maximum of our optimization problem in (2) is attained in the "interior" of S_+, that is S_{++}. Moreover, we prove that any critical point of (2) in S_{++} is a fixed point of the mapping G^Φ. Then we proceed to show that there exists a unique fixed point of G^Φ in S_{++} and thus there is a unique critical point of (2) in S_{++}. As the global maximizer of (2) exists and is attained in the interior, this fixed point has to be the global maximizer.

Finally, the proof of the fact that G^Φ has a unique fixed point follows by noting that G^Φ maps B_{++} into B_{++} and the fact that B_{++} is a complete metric space with respect to the Thompson metric. We provide a characterization of the Lipschitz constant of G^Φ and in turn derive conditions under which G^Φ is a contraction. Finally, the application of the Banach fixed point theorem yields the uniqueness of the fixed point of G^Φ and the linear convergence rate to the global optimum of (2). In Section 4 we show the application of the established framework for our neural networks.
3 From the optimization problem to fixed point theory
Lemma 1. Let Φ : V → ℝ be differentiable. If ∇Φ(w, u) ∈ V_{++} for every (w, u) ∈ S_+, then the global maximum of Φ on S_+ is attained in S_{++}.

We now identify critical points of the objective Φ in S_{++} with fixed points of G^Φ in S_{++}.

Lemma 2. Let Φ : V → ℝ be differentiable. If ∇Φ(w, u) ∈ V_{++} for all (w, u) ∈ S_{++}, then (w*, u*) is a critical point of Φ in S_{++} if and only if it is a fixed point of G^Φ.
Our goal is to apply the Banach fixed point theorem to G^Φ : B_{++} ⊃ S_{++} → B_{++}. We recall this theorem for the convenience of the reader.

Theorem 2 (Banach fixed point theorem, e.g. [12]). Let (X, d) be a complete metric space with a mapping T : X → X such that d(T(x), T(y)) ≤ q d(x, y) for q ∈ [0, 1) and all x, y ∈ X. Then T has a unique fixed point x* in X, that is T(x*) = x*, and the sequence defined as x_{n+1} = T(x_n) with x_0 ∈ X converges, lim_{n→∞} x_n = x*, with linear convergence rate

d(x_n, x*) ≤ ( q^n / (1 − q) ) d(x_1, x_0).
So, we need to endow B_{++} with a metric μ so that (B_{++}, μ) is a complete metric space. A popular metric for the study of nonlinear eigenvalue problems on the positive orthant is the so-called Thompson metric d : ℝ_{++}^m × ℝ_{++}^m → ℝ_+ [18] defined as

d(z, z̃) = ‖ln(z) − ln(z̃)‖_∞    where    ln(z) = (ln(z_1), …, ln(z_m)).

Using the known facts that (ℝ_{++}^n, d) is a complete metric space and its topology coincides with the norm topology (see e.g. Corollary 2.5.6 and Proposition 2.5.2 [14]), we prove:
Lemma 3. For p ∈ (1, ∞) and ρ > 0, ({z ∈ ℝ_{++}^n | ‖z‖_p ≤ ρ}, d) is a complete metric space.
Now, the idea is to see B_{++} as a product of such metric spaces. For i = 1, …, K, let B_{++}^i = {w_i ∈ ℝ_{++}^{n_1} | ‖w_i‖_{p_w} ≤ ρ_w} and d_i(w_i, w̃_i) = β_i ‖ln(w_i) − ln(w̃_i)‖_∞ for some constant β_i > 0. Furthermore, let B_{++}^{K+1} = {u ∈ ℝ_{++}^{n_1×d} | ‖u‖_{p_u} ≤ ρ_u} and d_{K+1}(u, ũ) = β_{K+1} ‖ln(u) − ln(ũ)‖_∞. Then (B_{++}^i, d_i) is a complete metric space for every i ∈ [K + 1] and B_{++} = B_{++}^1 × … × B_{++}^K × B_{++}^{K+1}. It follows that (B_{++}, μ) is a complete metric space with μ : B_{++} × B_{++} → ℝ_+ defined as

μ((w, u), (w̃, ũ)) = Σ_{i=1}^K β_i ‖ln(w_i) − ln(w̃_i)‖_∞ + β_{K+1} ‖ln(u) − ln(ũ)‖_∞.
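These quantities are straightforward to evaluate numerically; a small sketch (ours) of the Thompson distance and of the weighted product metric:

```python
import numpy as np

def thompson(z, z_tilde):
    """Thompson metric d(z, z~) = || ln(z) - ln(z~) ||_inf on the positive orthant."""
    return np.max(np.abs(np.log(z) - np.log(z_tilde)))

def mu(w, u, w_t, u_t, beta):
    """Weighted product metric: sum_i beta_i d(w_i, w~_i) + beta_{K+1} d(u, u~)."""
    K = w.shape[0]
    return (sum(beta[i] * thompson(w[i], w_t[i]) for i in range(K))
            + beta[K] * thompson(u.ravel(), u_t.ravel()))
```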
The motivation for introducing the weights β_1, …, β_{K+1} > 0 is given by the next theorem. We provide a characterization of the Lipschitz constant of a mapping F : B_{++} → B_{++} with respect to μ. Moreover, this Lipschitz constant can be minimized by a smart choice of β. For i ∈ [K], a, j ∈ [n_1], b ∈ [d], we write F_{w_{i,j}} and F_{u_{ab}} to denote the components of F such that F = (F_{w_{1,1}}, …, F_{w_{1,n_1}}, F_{w_{2,1}}, …, F_{w_{K,n_1}}, F_{u_{11}}, …, F_{u_{n_1 d}}).
Lemma 4. Suppose that F ∈ C^1(B_{++}, V_{++}) and A ∈ ℝ_+^{(K+1)×(K+1)} satisfies

⟨|∇_{w_k} F_{w_{i,j}}(w, u)|, w_k⟩ ≤ A_{i,k} F_{w_{i,j}}(w, u),    ⟨|∇_u F_{w_{i,j}}(w, u)|, u⟩ ≤ A_{i,K+1} F_{w_{i,j}}(w, u)

and

⟨|∇_{w_k} F_{u_{ab}}(w, u)|, w_k⟩ ≤ A_{K+1,k} F_{u_{ab}}(w, u),    ⟨|∇_u F_{u_{ab}}(w, u)|, u⟩ ≤ A_{K+1,K+1} F_{u_{ab}}(w, u)

for all i, k ∈ [K], a, j ∈ [n_1], b ∈ [d] and (w, u) ∈ B_{++}. Then, for every (w, u), (w̃, ũ) ∈ B_{++} it holds

μ(F(w, u), F(w̃, ũ)) ≤ U μ((w, u), (w̃, ũ))    with    U = max_{k∈[K+1]} (Aᵀβ)_k / β_k.
Note that, from the Collatz-Wielandt ratio for nonnegative matrices, we know that the constant U in Lemma 4 is lower bounded by the spectral radius ρ(A) of A. Indeed, by Theorem 8.1.31 in [10], we know that if Aᵀ has a positive eigenvector β ∈ ℝ_{++}^{K+1}, then

max_{i∈[K+1]} (Aᵀβ)_i / β_i = ρ(A) = min_{β̃ ∈ ℝ_{++}^{K+1}} max_{i∈[K+1]} (Aᵀβ̃)_i / β̃_i.    (5)

Therefore, in order to obtain the minimal Lipschitz constant U in Lemma 4, we choose the weights of the metric μ to be the components of β. A combination of Theorem 2, Lemma 4 and this observation implies the following result.
Theorem 3. Let Φ ∈ C^1(V, ℝ) ∩ C^2(B_{++}, ℝ) with ∇Φ(S_+) ⊂ V_{++}. Let G^Φ : B_{++} → B_{++} be defined as in (3). Suppose that there exists a matrix A ∈ ℝ_+^{(K+1)×(K+1)} such that G^Φ and A satisfy the assumptions of Lemma 4 and Aᵀ has a positive eigenvector β ∈ ℝ_{++}^{K+1}. If ρ(A) < 1, then Φ has a unique critical point (w*, u*) in S_{++} which is the global maximum of the optimization problem (2). Moreover, the sequence (w^k, u^k)_k defined for any (w^0, u^0) ∈ S_{++} as (w^{k+1}, u^{k+1}) = G^Φ(w^k, u^k), k ∈ ℕ, satisfies lim_{k→∞} (w^k, u^k) = (w*, u*) and

‖(w^k, u^k) − (w*, u*)‖_∞ ≤ ρ(A)^k μ((w^1, u^1), (w^0, u^0)) / ( (1 − ρ(A)) min{β_{K+1}, min_{t∈[K]} β_t} )    ∀k ∈ ℕ,

where the weights in the definition of μ are the entries of β.
4 Application to Neural Networks
In the previous sections we have outlined the proof of our main result for a general objective
function satisfying certain properties. The purpose of this section is to prove that the
properties hold for our optimization problem for neural networks.
We recall our objective function from (2):

Φ(w, u) = (1/n) Σ_{i=1}^n [ −L(y_i, f(w, u)(x^i)) + Σ_{r=1}^K f_r(w, u)(x^i) ] + ε ( Σ_{r=1}^K Σ_{l=1}^{n_1} w_{r,l} + Σ_{l=1}^{n_1} Σ_{m=1}^d u_{lm} )

and the function class we are considering from (1):

f_r(x) = f_r(w, u)(x) = Σ_{l=1}^{n_1} w_{r,l} ( Σ_{m=1}^{d} u_{lm} x_m )^{α_l}.
The arbitrarily small ε in the objective is needed to make the gradient strictly positive on the boundary of V_+. We note that the assumption α_i ≥ 1 for every i ∈ [n_1] is crucial in the following lemma in order to guarantee that ∇Φ is well defined on S_+.

Lemma 5. Let Φ be defined as in (2); then ∇Φ(w, u) is strictly positive for any (w, u) ∈ S_+.
Next, we derive the matrix A ∈ ℝ^{(K+1)×(K+1)} in order to apply Theorem 3 to G^Φ with Φ defined in (2). As discussed in its proof, the matrix A given in the following theorem has a smaller spectral radius than that of Theorem 1. To express this matrix, we consider ξ^α_{p,q} : ℝ_{++}^{n_1} × ℝ_{++} → ℝ_{++} defined for p, q ∈ (1, ∞) and α ∈ ℝ_{++}^{n_1} as

ξ^α_{p,q}(φ, t) = [ Σ_{j∈J} (φ_j t^{α_j})^{pq/(q − α̃p)} ]^{1/p − α̃/q} + max_{l∈J^c} (φ_l t^{α_l})^{1/p},    (6)

where J = {l ∈ [n_1] | α_l p ≤ q}, J^c = {l ∈ [n_1] | α_l p > q} and α̃ = min_{l∈J} α_l.
Theorem 4. Let Φ be defined as above and G^Φ be as in (3). Set C_w = ρ_w ξ^α_{p′_w,p_u}(1, ρ_u ρ̃_x), C_u = ρ_w ξ^α_{p′_w,p_u}(α, ρ_u ρ̃_x) and ρ̃_x = max_{i∈[n]} ‖x^i‖_{p′_u}. Then A and G^Φ satisfy all assumptions of Lemma 4 with

A = 2 diag(p′_w − 1, …, p′_w − 1, p′_u − 1) [ Q_{w,w}  Q_{w,u} ; Q_{u,w}  Q_{u,u} ]

where Q_{w,w} ∈ ℝ_{++}^{K×K}, Q_{w,u} ∈ ℝ_{++}^{K×1}, Q_{u,w} ∈ ℝ_{++}^{1×K} and Q_{u,u} ∈ ℝ_{++} are defined as

Q_{w,w} = 2C_w 11ᵀ,    Q_{w,u} = (2C_u + ‖α‖_∞) 1,
Q_{u,w} = (2C_w + 1) 1ᵀ,    Q_{u,u} = 2C_u + ‖α‖_∞ − 1.

In the supplementary material, we prove that ξ^α_{p,q}(φ, t) ≤ Σ_{l=1}^{n_1} φ_l t^{α_l}, which yields the weaker bounds ξ_1, ξ_2 given in Theorem 1. In particular, this observation combined with Theorems 3 and 4 implies Theorem 1.
4.1 Neural networks with two hidden layers
We show how to extend our framework to neural networks with 2 hidden layers. In future work we will consider the general case. We briefly explain the major changes. Let n_1, n_2 ∈ ℕ and α ∈ ℝ_{++}^{n_1}, β ∈ ℝ_{++}^{n_2} with α_i, β_j ≥ 1 for all i ∈ [n_1], j ∈ [n_2]; our function class is:

f_r(x) = f_r(w, v, u)(x) = Σ_{l=1}^{n_2} w_{r,l} ( Σ_{m=1}^{n_1} v_{lm} ( Σ_{s=1}^{d} u_{ms} x_s )^{α_m} )^{β_l}
and the optimization problem becomes

max_{(w,v,u) ∈ S_+} Φ(w, v, u)    (7)

where V_+ = ℝ_+^{K×n_2} × ℝ_+^{n_2×n_1} × ℝ_+^{n_1×d}, S_+ = {(w_1, …, w_K, v, u) ∈ V_+ | ‖w_i‖_{p_w} = ρ_w, ‖v‖_{p_v} = ρ_v, ‖u‖_{p_u} = ρ_u} and

Φ(w, v, u) = (1/n) Σ_{i=1}^n [ −L(y_i, f(x^i)) + Σ_{r=1}^K f_r(x^i) ] + ε ( Σ_{r=1}^K Σ_{l=1}^{n_2} w_{r,l} + Σ_{l=1}^{n_2} Σ_{m=1}^{n_1} v_{lm} + Σ_{m=1}^{n_1} Σ_{s=1}^d u_{ms} ).
The map G^Φ : S_{++} → S_{++} = {z ∈ S_+ | z > 0}, G^Φ = (G^Φ_{w_1}, …, G^Φ_{w_K}, G^Φ_v, G^Φ_u), becomes

G^Φ_{w_i}(w, v, u) = ρ_w ψ_{p′_w}(∇_{w_i}Φ(w, v, u)) / ‖ψ_{p′_w}(∇_{w_i}Φ(w, v, u))‖_{p_w}    ∀i ∈ [K]    (8)

and

G^Φ_v(w, v, u) = ρ_v ψ_{p′_v}(∇_vΦ(w, v, u)) / ‖ψ_{p′_v}(∇_vΦ(w, v, u))‖_{p_v},    G^Φ_u(w, v, u) = ρ_u ψ_{p′_u}(∇_uΦ(w, v, u)) / ‖ψ_{p′_u}(∇_uΦ(w, v, u))‖_{p_u}.
We have the following equivalent of Theorem 1 for 2 hidden layers.
Theorem 5. Let {x^i, y_i}_{i=1}^n ⊂ ℝ_+^d × [K], p_w, p_v, p_u ∈ (1, ∞), ρ_w, ρ_v, ρ_u > 0, n_1, n_2 ∈ ℕ and α ∈ ℝ_{++}^{n_1}, β ∈ ℝ_{++}^{n_2} with α_i, β_j ≥ 1 for all i ∈ [n_1], j ∈ [n_2]. Let ρ_x = max_{i∈[n]} ‖x^i‖_{p′_u},

ξ = ρ_v ξ^α_{p′_v,p_u}(1, ρ_u ρ_x),    C_w = ρ_w ξ^β_{p′_w,p_v}(1, ξ),    C_v = ρ_w ξ^β_{p′_w,p_v}(β, ξ),    C_u = ‖α‖_∞ C_v,

and define A ∈ ℝ_{++}^{(K+2)×(K+2)} as

A_{m,l} = 4(p′_w − 1) C_w,    A_{m,K+1} = 2(p′_w − 1)(2C_v + ‖β‖_∞),
A_{m,K+2} = 2(p′_w − 1)(2C_u + ‖α‖_∞‖β‖_∞),    A_{K+1,l} = 2(p′_v − 1)(2C_w + 1),
A_{K+1,K+1} = 2(p′_v − 1)(2C_v + ‖β‖_∞ − 1),    A_{K+1,K+2} = 2(p′_v − 1)(2C_u + ‖α‖_∞‖β‖_∞),
A_{K+2,l} = 2(p′_u − 1)(2C_w + 1),    A_{K+2,K+1} = 2(p′_u − 1)(2C_v + ‖β‖_∞),
A_{K+2,K+2} = 2(p′_u − 1)(2C_u + ‖α‖_∞‖β‖_∞ − 1),    ∀ m, l ∈ [K].

If ρ(A) < 1, then (7) has a unique global maximizer (w*, v*, u*) ∈ S_{++}. Moreover, for every (w^0, v^0, u^0) ∈ S_{++}, there exists R > 0 such that

lim_{k→∞} (w^k, v^k, u^k) = (w*, v*, u*)    and    ‖(w^k, v^k, u^k) − (w*, v*, u*)‖_∞ ≤ R ρ(A)^k    ∀k ∈ ℕ,

where (w^{k+1}, v^{k+1}, u^{k+1}) = G^Φ(w^k, v^k, u^k) for every k ∈ ℕ and G^Φ is defined as in (8).
As for the case with one hidden layer, for any fixed architecture ρ_w, ρ_v, ρ_u > 0, n_1, n_2 ∈ ℕ and α ∈ ℝ_{++}^{n_1}, β ∈ ℝ_{++}^{n_2} with α_i, β_j ≥ 1 for all i ∈ [n_1], j ∈ [n_2], it is possible to derive lower bounds on p_w, p_v, p_u that guarantee ρ(A) < 1 in Theorem 5. Indeed, it holds

C_w ≤ ξ_1 = ρ_w Σ_{j=1}^{n_2} [ ρ_v Σ_{l=1}^{n_1} (ρ_u ρ̄_x)^{α_l} ]^{β_j}    and    C_v ≤ ξ_2 = ρ_w Σ_{j=1}^{n_2} β_j [ ρ_v Σ_{l=1}^{n_1} (ρ_u ρ̄_x)^{α_l} ]^{β_j},

with ρ̄_x = max_{i∈[n]} ‖x^i‖_1. Hence, the two hidden layers equivalent of (4) becomes

p_w > 4(K + 2)ξ_1 + 5,    p_v > 2(K + 2)(2ξ_2 + ‖β‖_∞) − 1,    p_u > 2(K + 2)‖α‖_∞(2ξ_2 + ‖β‖_∞) − 1.    (9)
5 Experiments

Figure 2: Training score (left) w.r.t. the optimal score p* and test error (right) of NLSM1 and Batch-SGD with different step-sizes. (Left panel: (p* − f)/|p*| against epochs; right panel: test error against epochs; curves: NLSM1 and SGD with step-sizes 100, 10, 1, 0.1, 0.01.)

Table 1: Test accuracy on UCI datasets

Dataset   | NLSM1 | NLSM2 | ReLU1 | ReLU2 | SVM
Cancer    | 96.4  | 96.4  | 95.7  | 93.6  | 95.7
Iris      | 90.0  | 96.7  | 100   | 93.3  | 100
Banknote  | 97.1  | 96.4  | 100   | 97.8  | 100
Blood     | 76.0  | 76.7  | 76.0  | 76.0  | 77.3
Haberman  | 75.4  | 75.4  | 70.5  | 72.1  | 72.1
Seeds     | 88.1  | 90.5  | 90.5  | 92.9  | 95.2
Pima      | 79.2  | 80.5  | 76.6  | 79.2  | 79.9
The shown experiments should be seen as a proof of concept. We do not yet have a good understanding of how one should pick the parameters of our model to achieve good
performance. However, the other papers which have up to now discussed global optimality
for neural networks [11, 8] have not included any results on real datasets. Thus, up to our
Nonlinear Spectral Method for 1 hidden layer

Input: Model n_1 ∈ ℕ, p_w, p_u ∈ (1, ∞), ρ_w, ρ_u > 0, α_1, …, α_{n_1} ≥ 1, ε > 0 so that the matrix A of Theorem 1 satisfies ρ(A) < 1. Accuracy δ > 0 and (w^0, u^0) ∈ S_{++}.
1  Let (w^1, u^1) = G^Φ(w^0, u^0) and compute R as in Theorem 3
2  Repeat
3      (w^{k+1}, u^{k+1}) = G^Φ(w^k, u^k)
4      k ← k + 1
5  Until k ≥ ln(δ/R) / ln(ρ(A))
Output: (w^k, u^k) fulfills ‖(w^k, u^k) − (w*, u*)‖_∞ < δ.

With G^Φ defined as in (3). The method for two hidden layers is similar: consider G^Φ as in (8) instead of (3) and assume that the model satisfies Theorem 5.
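A compact NumPy rendering of this procedure (our own sketch, not the authors' code; it uses a fixed iteration budget instead of the stopping rule based on R, and assumes nonnegative inputs X and labels y in {0, …, K−1}):

```python
import numpy as np

def nlsm_train(X, y, K, alpha, rho_w, rho_u, p_w, p_u, eps=1e-4, iters=50, seed=0):
    """Nonlinear spectral method for the one-hidden-layer model (2)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    alpha = np.asarray(alpha, dtype=float)
    n1 = alpha.size
    pwc, puc = p_w / (p_w - 1.0), p_u / (p_u - 1.0)      # Hoelder conjugates

    def sphere(z, p, rho):                               # project onto ||z||_p = rho
        return rho * z / np.linalg.norm(z.ravel(), ord=p)

    def psi(g, pc):                                      # psi_p(x) = sign(x)|x|^{p-1}
        return np.sign(g) * np.abs(g) ** (pc - 1.0)

    w = np.stack([sphere(rng.random(n1) + 0.1, p_w, rho_w) for _ in range(K)])
    u = sphere(rng.random((n1, d)) + 0.1, p_u, rho_u)

    for _ in range(iters):
        z = X @ u.T                                      # (n, n1)
        h = z ** alpha                                   # hidden activations
        f = h @ w.T                                      # (n, K) network outputs
        p = np.exp(f - f.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)                # softmax
        g = np.zeros_like(p)
        g[np.arange(n), y] = 1.0
        gf = (g - p + 1.0) / n                           # dPhi/df_r(x^i), always > 0
        grad_w = gf.T @ h + eps                          # (K, n1)
        grad_u = ((gf @ w) * alpha * z ** (alpha - 1.0)).T @ X + eps  # (n1, d)
        w = np.stack([sphere(psi(grad_w[i], pwc), p_w, rho_w) for i in range(K)])
        u = sphere(psi(grad_u, puc), p_u, rho_u)
    return w, u
```

Under the conditions of Theorem 1 the iterates contract linearly toward the unique global maximizer, so a few dozen iterations usually suffice in practice.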
knowledge, we show for the first time a globally optimal algorithm for neural networks that
leads to non-trivial classification results.
We test our methods on several low dimensional UCI datasets and denote our algorithms as
NLSM1 (one hidden layer) and NLSM2 (two hidden layers). We choose the parameters of our
model out of 100 randomly generated combinations of (n_1, α, ρ_w, ρ_u) ∈ [2, 20] × [1, 4] × (0, 1]^2 (respectively (n_1, n_2, α, β, ρ_w, ρ_v, ρ_u) ∈ [2, 10]^2 × [1, 4]^2 × (0, 1]^3) and pick the best one based on 5-fold cross-validation error. We use Equation (4) (resp. Equation (9)) to choose p_u, p_w (resp. p_u, p_v, p_w) so that every generated model satisfies the conditions of Theorem 1 (resp. Theorem 5), i.e. ρ(A) < 1. Thus, global optimality is guaranteed in all our experiments.
For comparison, we use the nonlinear RBF-kernel SVM and implement two versions of the Rectified Linear Unit network - one for one hidden layer networks (ReLU1) and one for two hidden layer networks (ReLU2). To train ReLU, we use a stochastic gradient descent method which minimizes the sum of the logistic loss and an L2 regularization term over weight matrices to avoid over-fitting. All parameters of each method are jointly cross-validated. More precisely, for ReLU the number of hidden units takes values from 2 to 20, and the step-sizes and regularizers are taken in {10^{-6}, 10^{-5}, …, 10^2} and {0, 10^{-4}, 10^{-3}, …, 10^4} respectively. For SVM, the hyperparameter C and the kernel parameter γ of the radial basis function K(x^i, x^j) = exp(−γ ‖x^i − x^j‖²) are taken from {2^{-5}, 2^{-4}, …, 2^{20}} and {2^{-15}, 2^{-14}, …, 2^3} respectively. Note that ReLUs allow negative weights while our models do not. The results presented in Table 1 show that overall our nonlinear spectral methods achieve slightly worse performance than kernel SVM while being competitive with, or slightly better than, ReLU networks.
For Iris and Banknote, we note that without any constraints ReLU1 can easily find an
architecture which achieves zero test error while this is difficult for our models as we impose
constraints on the architecture in order to prove global optimality.
We compare our algorithms with Batch-SGD in order to optimize (2) with batch-size being
5% of the training data, while the step-size is fixed and selected between 10^{-2} and 10^2.
At each iteration of our spectral method and each epoch of Batch-SGD, we compute the
objective and test error of each method and show the results in Figure 2. One can see that
our method is much faster than SGDs, and has a linear convergence rate. We noted in
our experiments that as α is large and our data lies between [0, 1], all units in the network
tend to have small values that make the whole objective function relatively small. Thus, a
relatively large change in (w, u) might cause only small changes in the objective function
but performance may vary significantly as the distance is large in the parameter space. In
other words, a small change in the objective may have been caused by a large change in the
parameter space, and thus, largely influences the performance - which explains the behavior
of SGDs in Figure 2.
The magnitude of the entries of the matrix A in Theorems 1 and 5 grows with the number
of hidden units and thus the spectral radius ρ(A) also increases with this number. As we
expect that the number of required hidden units grows with the dimension of the datasets
we have limited ourselves in the experiments to low-dimensional datasets. However, these
bounds are likely not to be tight, so that there might be room for improvement in terms of
dependency on the number of hidden units.
Acknowledgment
The authors acknowledge support by the ERC starting grant NOLEPRO 307793.
References
[1] M. Anthony and P. Bartlett. Neural Network Learning: Theoretical Foundations. Cambridge University Press, New York, 1999.
[2] S. Arora, A. Bhaskara, R. Ge, and T. Ma. Provable bounds for learning some deep representations. In ICML, 2014.
[3] A. Berman and R. J. Plemmons. Nonnegative Matrices in the Mathematical Sciences. SIAM, Philadelphia, 1994.
[4] D. P. Bertsekas. Nonlinear Programming. Athena Scientific, Belmont, Mass., 1999.
[5] A. Choromanska, M. Henaff, M. Mathieu, G. Ben Arous, and Y. LeCun. The loss surfaces of multilayer networks. In AISTATS, 2015.
[6] A. Daniely, R. Frostig, and Y. Singer. Toward deeper understanding of neural networks: The power of initialization and a dual view on expressivity, 2016. arXiv:1602.05897v1.
[7] A. Gautier, F. Tudisco, and M. Hein. The Perron-Frobenius theorem for multi-homogeneous maps. In preparation, 2016.
[8] B. D. Haeffele and R. Vidal. Global optimality in tensor factorization, deep learning, and beyond, 2015. arXiv:1506.07540v1.
[9] M. Hardt, B. Recht, and Y. Singer. Train faster, generalize better: Stability of stochastic gradient descent. In ICML, 2016.
[10] R. A. Horn and C. R. Johnson. Matrix Analysis. Cambridge University Press, New York, second edition, 2013.
[11] M. Janzamin, H. Sedghi, and A. Anandkumar. Beating the perils of non-convexity: guaranteed training of neural networks using tensor methods, 2015. arXiv:1506.08473v3.
[12] W. A. Kirk and M. A. Khamsi. An Introduction to Metric Spaces and Fixed Point Theory. John Wiley, New York, 2001.
[13] Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 521, 2015.
[14] B. Lemmens and R. D. Nussbaum. Nonlinear Perron-Frobenius Theory. Cambridge University Press, New York, general edition, 2012.
[15] R. Livni, S. Shalev-Shwartz, and O. Shamir. On the computational efficiency of training neural networks. In NIPS, pages 855–863, 2014.
[16] J. Schmidhuber. Deep learning in neural networks: An overview. Neural Networks, 61:85–117, 2015.
[17] J. Sima. Training a single sigmoidal neuron is hard. Neural Computation, 14:2709–2728, 2002.
[18] A. C. Thompson. On certain contraction mappings in a partially ordered vector space. Proceedings of the American Mathematical Society, 14:438–443, 1963.
Joint quantile regression in vector-valued RKHSs
Maxime Sangnier, Olivier Fercoq, Florence d'Alché-Buc
LTCI, CNRS, Télécom ParisTech
Université Paris-Saclay
75013, Paris, France
{maxime.sangnier, olivier.fercoq, florence.dalche}
@telecom-paristech.fr
Abstract
Addressing the will to give a more complete picture than an average relationship
provided by standard regression, a novel framework for estimating and predicting
simultaneously several conditional quantiles is introduced. The proposed methodology leverages kernel-based multi-task learning to curb the embarrassing phenomenon of quantile crossing, with a one-step estimation procedure and no postprocessing. Moreover, this framework comes along with theoretical guarantees
and an efficient coordinate descent learning algorithm. Numerical experiments on
benchmark and real datasets highlight the enhancements of our approach regarding the prediction error, the crossing occurrences and the training time.
1 Introduction
Given a couple (X, Y ) of random variables, where Y takes scalar values, a common aim in statistics
and machine learning is to estimate the conditional expectation E [Y | X = x] as a function of x. In
the previous setting, called regression, one assumes that the main information in Y is a scalar value
corrupted by a centered noise. However, in some applications such as medicine, economics, social
sciences and ecology, a more complete picture than an average relationship is required to deepen the
analysis. Expectiles and quantiles are different quantities able to achieve this goal.
This paper deals with this last setting, called (conditional) quantile regression. This topic has been
championed by Koenker and Bassett [16] as the minimization of the pinball loss (see [15] for an
extensive presentation) and brought to the attention of the machine learning community by Takeuchi
et al. [26]. Ever since then, several studies have built upon this framework and the most recent ones
include regressing a single quantile of a random vector [12]. On the contrary, we are interested in
estimating and predicting simultaneously several quantiles of a scalar-valued random variable Y |X
(see Figure 1), thus called joint quantile regression. For this purpose, we focus on non-parametric
hypotheses from a vector-valued Reproducing Kernel Hilbert Space (RKHS).
Since quantiles of a distribution are closely related, joint quantile regression is subsumed under the
field of multi-task learning [3]. As a consequence, vector-valued kernel methods are appropriate for
such a task. They have already been used for various applications, such as structured classification
[10] and prediction [7], manifold regularization [21, 6] and functional regression [14]. Quantile
regression is a new opportunity for vector-valued RKHSs to perform in a multi-task problem, along
with a loss that is different from the `2 cost predominantly used in the previous references.
In addition, such a framework offers a novel way to curb the phenomenon of quantile curve crossing,
while preserving the so called quantile property (which may not be true for current approaches). This
one guarantees that the ratio of observations lying below a predicted quantile is close to the quantile
level of interest.
In a nutshell, the contributions of this work are (following the outline of the paper): i) a novel
methodology for joint quantile regression, based on vector-valued RKHSs; ii) enhanced predictions
thanks to a multi-task approach, along with a limited appearance of crossing curves; iii) theoretical
guarantees regarding the generalization of the model; iv) an efficient coordinate descent
algorithm, able to handle the intercept of the model in a manner that is simple and different
from Sequential Minimal Optimization (SMO). Besides these novelties, the enhancements of the
proposed method and the efficiency of our learning algorithm are supported by numerical experiments on benchmark and real datasets.
2
Problem definition
2.1
Quantile regression
Let 𝒴 ⊂ ℝ be a compact set, 𝒳 be an arbitrary input space and (X, Y) ∈ 𝒳 × 𝒴 a pair of random
variables following an unknown joint distribution. For a given probability τ ∈ (0, 1), the
conditional τ-quantile of (X, Y) is the function μ_τ : 𝒳 → ℝ such that μ_τ(x) = inf{μ ∈ ℝ :
P(Y ≤ μ | X = x) ≥ τ}. Thus, given a training set {(x_i, y_i)}_{i=1}^n ∈ (𝒳 × 𝒴)^n, the quantile
regression problem aims at estimating this conditional τ-quantile function μ_τ. Following Koenker
[15], this can be achieved by minimization of the pinball loss ℓ_τ(r) = max(τ r, (τ − 1) r), where
r ∈ ℝ is a residual. Using such a loss first arose from the observation that the location parameter μ
that minimizes the ℓ1-loss ∑_{i=1}^n |y_i − μ| is an estimator of the unconditional median [16].
Now focusing on the estimation of a conditional quantile, one can show that the target function
μ_τ is a minimizer of the τ-quantile risk R_τ(h) = E[ℓ_τ(Y − h(X))] [17]. However, since the
joint probability of (X, Y) is unknown but we are provided with an independent and identically
distributed (iid) sample of observations {(x_i, y_i)}_{i=1}^n, we resort to minimizing the empirical risk
R_τ^emp(h) = (1/n) ∑_{i=1}^n ℓ_τ(y_i − h(x_i)), within a class H ⊂ ℝ^𝒳 of functions, calibrated in order
to overcome the shift from the true risk to the empirical one. In particular, when H has the form
H = {h = f + b : b ∈ ℝ, f ∈ ℝ^𝒳, Ψ(f) ≤ c}, with Ψ : ℝ^𝒳 → ℝ being a convex function
and c > 0 a constant, Takeuchi et al. [26] proved that (similarly to the unconditional case) the
quantile property is satisfied: for any estimator ĥ obtained by minimizing R_τ^emp in H, the ratio of
observations lying below ĥ (i.e. y_i < ĥ(x_i)) equals τ up to a small error (the ratio of observations
exactly equal to ĥ(x_i)). Moreover, under some regularity assumptions, this quantity converges to τ
when the sample grows. Note that these properties are true since the intercept b is unconstrained.
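To make the pinball loss and the quantile property concrete, here is a minimal numpy sketch (the
function and variable names are ours, not the paper's):

    import numpy as np

    def pinball_loss(r, tau):
        """Pinball loss l_tau(r) = max(tau * r, (tau - 1) * r), elementwise."""
        return np.maximum(tau * r, (tau - 1.0) * r)

    rng = np.random.default_rng(0)
    y = rng.normal(size=1000)
    tau = 0.9

    # The location parameter minimizing the empirical pinball risk is the
    # tau-quantile of the sample (here approximated on a grid).
    grid = np.linspace(-3.0, 3.0, 601)
    risks = [pinball_loss(y - mu, tau).mean() for mu in grid]
    mu_hat = grid[int(np.argmin(risks))]

    # Quantile property: about tau of the observations lie below mu_hat.
    print(mu_hat, np.mean(y < mu_hat))   # mu_hat is close to 1.28 for N(0, 1)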
2.2
Multiple quantile regression
In many real problems (such as medical reference charts), one is interested in estimating not only a
single quantile curve but several of them. Thus, denoting N_p the range of integers between 1 and p,
for several quantile levels τ_j (j ∈ N_p) and functions h_j ∈ H, the empirical loss to be minimized can
be written as the following separable function: R_τ^emp(h_1, . . . , h_p) = (1/n) ∑_{i=1}^n ∑_{j=1}^p ℓ_{τ_j}(y_i − h_j(x_i)),
where τ denotes the p-dimensional vector of quantile levels.
A nice feature of multiple quantile regression is thus to extract slices of the conditional distribution
of Y |X. However, when quantiles are estimated independently, an embarrassing phenomenon often
appears: quantile functions cross, thus violating the basic principle that the cumulative distribution
function should be monotonically non-decreasing. We refer to that pitfall as the crossing problem.
In this paper, we propose to prevent curve crossing by considering the problem of multiple quantile
regression as a vector-valued regression problem where outputs are not independent. An interesting
feature of our method is that it preserves the quantile property, while most other approaches lose it
when struggling with the crossing problem.
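As an aside, the separable multi-quantile empirical loss above is straightforward to vectorize; a
minimal sketch (our notation, not from the paper):

    import numpy as np

    def multi_pinball(Y, H, taus):
        """Empirical joint loss (1/n) sum_i sum_j l_{tau_j}(y_i - h_j(x_i)).

        Y    : (n,) observed targets
        H    : (n, p) predictions, column j from the tau_j-quantile model
        taus : (p,) quantile levels
        """
        R = Y[:, None] - H                            # residuals, shape (n, p)
        L = np.maximum(taus * R, (taus - 1.0) * R)    # broadcast over levels
        return L.sum(axis=1).mean()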
2.3
Related work
Going beyond linear and spline-based models, quantile regression in RKHSs was introduced a
decade ago [26, 17]. In [26], the authors proposed to minimize the pinball loss in a scalar-valued
RKHS and to add hard constraints on the training points in order to prevent the crossing problem.
Our work can be legitimately seen as an extension of [26] to multiple quantile regression using
a vector-valued RKHS and structural constraints against curve crossing thanks to an appropriate
matrix-valued kernel.
Another related work is [27], which first introduced the idea of multi-task learning for quantile
regression. In [27], linear quantile curves are estimated jointly with a common feature subspace
shared across the tasks, based on multi-task feature learning [3]. In addition, the authors showed
that for such linear regressors, a common representation shared across infinitely many tasks can be
computed, thus estimating simultaneously conditional quantiles for all possible quantile levels. Both
previous approaches will be considered in the numerical experiments.
Quantile regression has been investigated from many perspectives, including different losses leading
to an approximate quantile property (ε-insensitive [25], re-weighted least squares [22]), along
with models and estimation procedures to curb the crossing problem: location-scale models with a
multi-step strategy [13], tensor product spline surfaces [22], non-negative valued kernels [18], hard
non-crossing constraints [26, 28, 5], inversion and monotonization of a conditional distribution
estimator [9] and rearrangement of quantile estimates [8], to cite only a few references. Let us remark
that some solutions, such as non-crossing constraints [26], theoretically lose the quantile property
because they constrain the intercept.
In comparison to the literature, we propose a novel methodology, based on vector-valued RKHSs,
with a one-step estimation, no post-processing, and keeping the quantile property while dealing with
curve crossing. We also provide an efficient learning algorithm and theoretical guarantees.
3
Vector-valued RKHS for joint quantile regression
3.1
Joint estimation
Given a vector τ ∈ (0, 1)^p of quantile levels, multiple quantile regression is now considered as a
joint estimation in (ℝ^p)^𝒳 of the target function x ∈ 𝒳 ↦ (μ_{τ_1}(x), . . . , μ_{τ_p}(x)) ∈ ℝ^p of conditional
quantiles. Thus, let now Ψ be a convex regularizer on (ℝ^p)^𝒳 and H = {h = f + b : b ∈ ℝ^p,
f ∈ (ℝ^p)^𝒳, Ψ(f) ≤ c} be the hypothesis set. Similarly to previously, joint quantile regression
aims at minimizing R_τ^emp(h) = (1/n) ∑_{i=1}^n ℓ_τ(y_i 1 − h(x_i)), where 1 stands for the all-ones vector,
ℓ_τ(r) = ∑_{j=1}^p ℓ_{τ_j}(r_j) and h is in H, which is to be appropriately chosen in order to estimate the
p conditional quantiles while enhancing predictions and avoiding curve crossing. It is worthwhile
remarking that, independently of the choice of Ψ, the quantile property is still verified for a vector-valued
estimator since the loss is separable and the intercept is unconstrained. Similarly, the vector-valued
function whose components are the conditional τ_j-quantiles is still a minimizer of the τ-quantile
risk R_τ(h) = E[ℓ_τ(Y 1 − h(X))].
In this context, the constraint Ψ does not necessarily apply independently to each coordinate function
h_j but can impose dependencies between them. The theory of vector-valued RKHSs seems especially
well suited for this purpose when considering Ψ as the norm associated to such a space. In this situation,
the choice of the kernel does not only influence the nature of the hypotheses (linear, non-linear,
universal approximators) but also the way the estimation procedure is regularized. In particular, the
kernel critically operates on the output space by encoding structural constraints on the outputs.
3.2
Matrix-valued kernel
Let us denote ·^⊤ the transpose operator and L(ℝ^p) the set of linear and bounded operators from
ℝ^p to itself. In our (finite-dimensional) case, L(ℝ^p) comes down to the set of p × p real-valued matrices.
A matrix-valued kernel is a function K : 𝒳 × 𝒳 → L(ℝ^p) that is symmetric and positive [20]:
∀(x, x′) ∈ 𝒳 × 𝒳, K(x, x′) = K(x′, x)^⊤ and ∀m ∈ ℕ, ∀{(x_i, y_i)}_{1≤i≤m} ∈ (𝒳 × ℝ^p)^m,
∑_{1≤i,j≤m} ⟨y_i | K(x_i, x_j) y_j⟩_{ℓ2} ≥ 0.
Let K be such a kernel and, for any x ∈ 𝒳, let K_x : y ∈ ℝ^p ↦ K_x y ∈ (ℝ^p)^𝒳 be the linear operator
such that ∀x′ ∈ 𝒳 : (K_x y)(x′) = K(x′, x) y. There exists a unique Hilbert space of functions
𝒦_K ⊂ (ℝ^p)^𝒳 (with an inner product and a norm respectively denoted ⟨· | ·⟩_K and ‖·‖_K), called the
RKHS associated to K, such that, for every x ∈ 𝒳 [20]: K_x spans the space 𝒦_K (∀y ∈ ℝ^p : K_x y ∈ 𝒦_K),
K_x is bounded for the uniform norm (sup_{y∈ℝ^p, ‖y‖_{ℓ2}≤1} ‖K_x y‖_K < ∞) and ∀f ∈ 𝒦_K : f(x) = K_x^* f
(reproducing property), where ·^* denotes the adjoint operator.
3
From now on, we assume that we are provided with a matrix-valued kernel K and we limit the
hypothesis space to H = {f + b : b ∈ ℝ^p, f ∈ 𝒦_K, ‖f‖_K ≤ c} (i.e. Ψ = ‖·‖_K). Though several
candidates are available [1], we focus on one of the simplest and most efficiently computable kernels,
called the decomposable kernel: K : (x, x′) ↦ k(x, x′) B, where k : 𝒳 × 𝒳 → ℝ is a scalar-valued
kernel and B is a p × p symmetric Positive Semi-Definite (PSD) matrix. In this particular case, the
matrix B encodes the relationship between the components f_j and thus the link between the different
conditional quantile estimators. A rational choice is to consider B = (exp(−γ(τ_i − τ_j)²))_{1≤i,j≤p}.
To explain it, let us consider two extreme cases (see also Figure 1).
First, when γ = 0, B is the all-ones matrix. Since 𝒦_K is the closure of the space
span{K_x y : (x, y) ∈ 𝒳 × ℝ^p}, any f ∈ 𝒦_K has all its components equal. Consequently, the
quantile estimators h_j = f_j + b_j are parallel (and non-crossing) curves. In this case, the regressor
is said to be homoscedastic. Second, when γ → +∞, B → I (the identity matrix). In this situation,
it is easy to show that the components of f ∈ 𝒦_K are independent from each other and that
‖f‖²_K = ∑_{j=1}^p ‖f_j‖²_{K₀} (where ‖·‖_{K₀} is the norm of the RKHS associated to k) is separable.
Thus, each quantile function is learned independently from the others. Regressors are then said to be
heteroscedastic. It appears clearly that between these two extreme cases, there is room for learning
a non-homoscedastic and non-crossing quantile regressor (while preserving the quantile property).
Figure 1: Estimated (plain lines) and true (dashed lines) conditional quantiles of Y |X (synthetic
dataset), from homoscedastic regressors (γ = 0) to heteroscedastic ones (γ → +∞).
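For concreteness, a small numpy sketch of the decomposable kernel used here, with the Gaussian
output matrix B interpolating between the two extreme cases (all names are ours):

    import numpy as np

    def output_matrix(taus, gamma):
        """B = (exp(-gamma * (tau_i - tau_j)^2))_{i,j}: all-ones when gamma = 0
        (parallel, homoscedastic curves), identity as gamma -> +inf
        (independent, heteroscedastic curves)."""
        d = np.subtract.outer(taus, taus)
        return np.exp(-gamma * d ** 2)

    def gaussian_kernel(X, Z, sigma):
        """Scalar kernel matrix k(x, z) = exp(-||x - z||^2 / (2 sigma^2)),
        for 2-D arrays X (n, d) and Z (m, d)."""
        sq = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / (2.0 * sigma ** 2))

    # K(x, z) = k(x, z) * B; on a sample of size n, the full Gram operator
    # acting on stacked coefficients is the np x np Kronecker product kron(K, B).
    taus = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
    B = output_matrix(taus, gamma=1e-2)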
4
Theoretical analysis
This section is intended to give a few theoretical insights about the expected behavior of our
hypotheses. Here, we assume that we work in an RKHS, but not necessarily with a decomposable
kernel. First, we aim at providing a uniform generalization bound. For this purpose, let
F = {f ∈ 𝒦_K : ‖f‖_K ≤ c}, let tr(·) be the trace operator, let ((X_i, Y_i))_{1≤i≤n} ∈ (𝒳 × 𝒴)^n be an
iid sample and denote by R̂_n(h) = (1/n) ∑_{i=1}^n ℓ_τ(Y_i 1 − h(X_i)) the random variable associated to
the empirical risk of a hypothesis h.
Theorem 4.1 (Generalization). Let a ∈ ℝ₊ be such that sup_{y∈𝒴} |y| ≤ a, let b ∈ 𝒴^p and let
H = {f + b : f ∈ F} be the class of hypotheses. Moreover, assume that there exists κ ≥ 0 such that
sup_{x∈𝒳} tr(K(x, x)) ≤ κ. Then with probability at least 1 − δ (for δ ∈ (0, 1]):

∀h ∈ H, R(h) ≤ R̂_n(h) + 2√2 c √(pκ/n) + (2pa + c√(pκ)) √(log(1/δ)/(2n)).
Sketch of proof (full derivation in Appendix A.1). We start with a concentration inequality for
scalar-valued functions [4] and we use a vector-contraction property [19]. The bound on the
Rademacher complexity of [24, Theorem 3.1] concludes the proof.
The uniform bound in Theorem 4.1 states that, with high probability, all the hypotheses of interest
have a true risk which is less than the empirical risk up to an additive bias in O(1/√n). Let us remark
that it makes use of the output dimension p. However, there exist non-uniform generalization bounds
for operator-valued kernel-based hypotheses, which do not depend on the output dimension [14] and
are thus well suited for infinite-dimensional output spaces. Yet those results only hold for optimal
solutions ĥ of the learning problem, which we never obtain in practice.
As a second theoretical insight, Theorem 4.2 gives a bound on the quantile property, which is similar
to the one provided in [26] for scalar-valued functions. It states that E[P(Y ≤ h_j(X) | X)] does not
deviate too much from τ_j.
Theorem 4.2 (Quantile deviation). Let us consider that the assumptions of Theorem 4.1 hold.
Moreover, let ε > 0 be an artificial margin, let ψ⁺_ε : r ∈ ℝ ↦ proj_{[0,1]}(1 − r/ε) and
ψ⁻_ε : r ∈ ℝ ↦ proj_{[0,1]}(−r/ε) be two ramp functions, and let j ∈ N_p and δ ∈ (0, 1]. Then with
probability at least 1 − δ:

∀h ∈ H, (1/n) ∑_{i=1}^n ψ⁻_ε(Y_i − h_j(X_i)) − Δ ≤ E[P(Y ≤ h_j(X) | X)] ≤ (1/n) ∑_{i=1}^n ψ⁺_ε(Y_i − h_j(X_i)) + Δ,

where Δ = (2c/ε) √(κ/n) + √(log(2/δ)/(2n)).

Sketch of proof (full derivation in Appendix A.2). The proof is similar to the one of Theorem 4.1,
remarking that ψ⁺_ε and ψ⁻_ε are 1/ε-Lipschitz continuous.
5
Optimization algorithm
In order to finalize the M-estimation of a non-parametric function, we need a way to jointly solve the
optimization problem of interest and compute the estimator. For ridge regression in vector-valued
RKHSs, representer theorems make it possible to reformulate the hypothesis f and to derive algorithms
based on matrix inversion [20, 6] or on a Sylvester equation [10]. Since the optimization problem we are
tackling is quite different, those methods do not apply. Yet, deriving a dual optimization problem
makes it possible to hit the mark.
Quantile estimation, as presented in this paper, comes down to minimizing a regularized empirical
risk, defined by the pinball loss ℓ_τ. Since this loss function is non-differentiable, we introduce
slack variables ξ and ξ* to get the following primal formulation, with a regularization parameter C
to be tuned:

minimize_{f ∈ 𝒦_K, b ∈ ℝ^p, ξ, ξ* ∈ (ℝ^p)^n}  (1/2) ‖f‖²_K + C ∑_{i=1}^n (⟨τ | ξ_i⟩_{ℓ2} + ⟨1 − τ | ξ*_i⟩_{ℓ2})
s.t.  ∀i ∈ N_n : ξ_i ≽ 0, ξ*_i ≽ 0, y_i 1 − f(x_i) − b = ξ_i − ξ*_i,    (1)
where ≽ denotes a pointwise inequality. A dual formulation of Problem (1) is (see Appendix B):

minimize_{α ∈ (ℝ^p)^n}  (1/2) ∑_{i,j=1}^n ⟨α_i | K(x_i, x_j) α_j⟩_{ℓ2} − ∑_{i=1}^n y_i ⟨α_i | 1⟩_{ℓ2}
s.t.  ∑_{i=1}^n α_i = 0_{ℝ^p},  ∀i ∈ N_n : C(τ − 1) ≼ α_i ≼ Cτ,    (2)
where the linear constraints come from considering an intercept b. The Karush-Kuhn-Tucker (KKT)
conditions of Problem (1) indicate that a minimizer f̂ of (1) can be recovered from a solution α̂ of
(2) with the formula f̂ = ∑_{i=1}^n K_{x_i} α̂_i. Moreover, b̂ can also be obtained thanks to the KKT
conditions. However, as we deal with a numerical approximate solution α, in practice b is computed by solving
Problem (1) with f fixed. This boils down to taking b_j as the τ_j-quantile of (y_i − f_j(x_i))_{1≤i≤n}.
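As an illustration of this recovery step, a minimal sketch under a decomposable kernel
K(x, z) = k(x, z)B with symmetric B (function names and the precomputed kernel matrices are our
assumptions, not the authors' code):

    import numpy as np

    def predict(Kte, alpha, B, b):
        """Evaluate h(x) = f_hat(x) + b on test points, given the scalar kernel
        cross-matrix Kte[m, i] = k(x_m, x_i). With K(x, z) = k(x, z) B and
        f_hat = sum_i K_{x_i} alpha_i, one gets f_hat(x) = B (sum_i k(x, x_i)
        alpha_i), i.e. Kte @ alpha @ B for a symmetric B."""
        return Kte @ alpha @ B + b                    # shape (m, p)

    def intercept(y, F, taus):
        """b_j set to the tau_j-quantile of the residuals (y_i - f_j(x_i)),
        as prescribed once f is fixed."""
        return np.array([np.quantile(y - F[:, j], t) for j, t in enumerate(taus)])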
Problem (2) is a common quadratic program that can be solved with off-the-shelf solvers. However,
since we are essentially interested in decomposable kernels K(·, ·) = k(·, ·)B, the quadratic part of
the objective function is defined by the np × np matrix K ⊗ B, where ⊗ is the Kronecker product
and K = (k(x_i, x_j))_{1≤i,j≤n}. Storing this matrix explicitly is likely to be time and memory
expensive. In order to improve the estimation procedure, ad hoc algorithms can be derived. For
instance, regression with a decomposable kernel boils down to solving a Sylvester equation (which
can be done efficiently) [10], and a vector-valued Support Vector Machine (SVM) without intercept
can be learned with a coordinate descent algorithm [21]. However, these methods cannot be used in
our setting since the loss function is different and considering the intercept is necessary for the
quantile property. Yet, coordinate descent could theoretically be extended to an SMO technique able
to handle the linear constraints introduced by the intercept. However, SMO usually works with a
single linear constraint and needs heuristics to run efficiently, which are quite difficult to find (even
though an implementation exists for two linear constraints [25]).
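For small n, the dual (2) can indeed be handed to an off-the-shelf QP solver; a hedged sketch with
CVXOPT, explicitly forming the np × np matrix K ⊗ B that is warned about above (our code, not
the authors' implementation):

    import numpy as np
    from cvxopt import matrix, solvers

    def solve_dual_qp(K, B, y, taus, C):
        """Solve the dual (2) over x = (alpha_1; ...; alpha_n) in R^{np}:
        min (1/2) x^T (K kron B) x + q^T x with q's i-th block equal to
        -y_i * 1_p, box constraints C(tau - 1) <= alpha_i <= C tau and the
        equality sum_i alpha_i = 0. Only viable for small n."""
        n, p = K.shape[0], len(taus)
        P = np.kron(K, B)
        q = -np.repeat(y, p)
        G = np.vstack([np.eye(n * p), -np.eye(n * p)])
        h = np.concatenate([np.tile(C * taus, n), -np.tile(C * (taus - 1.0), n)])
        A = np.kron(np.ones((1, n)), np.eye(p))   # encodes sum_i alpha_i = 0
        sol = solvers.qp(matrix(P), matrix(q), matrix(G), matrix(h),
                         matrix(A), matrix(np.zeros(p)))
        return np.array(sol['x']).reshape(n, p)   # row i = alpha_i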
Therefore, for the sake of efficiency, we propose to use a Primal-Dual Coordinate Descent (PDCD)
technique, recently introduced in [11]. This algorithm (which is proved to converge) is able to deal
with the linear constraints coming from the intercept and is thus perfectly workable for the problem at
hand. Moreover, PDCD has been shown to compare favorably with SMO for SVMs.
Table 1: Empirical pinball loss and crossing loss ×100 (the lower, the better). In the original table,
bullets (resp. circles) indicate statistically significant (resp. non-significant) differences from the
proposed method (JQR); these per-cell markers and the boldfacing of best results could not be
recovered from the extracted text.

Pinball loss:
Data set        IND.           IND. (NC)      MTFL           JQR
caution         102.6 ± 17.3   103.2 ± 17.2   102.9 ± 19.0   102.6 ± 19.0
ftcollinssnow   151.1 ± 8.2    150.8 ± 8.0    152.4 ± 8.9    153.7 ± 12.1
highway         102.9 ± 39.1   102.8 ± 38.9   102.0 ± 34.5   103.7 ± 35.7
heights         128.2 ± 2.4    128.2 ± 2.4    128.6 ± 2.2    127.9 ± 1.8
sniffer         44.8 ± 6.7     44.6 ± 6.8     46.9 ± 7.6     45.2 ± 6.9
snowgeese       68.4 ± 35.3    68.4 ± 35.3    75.3 ± 38.2    76.0 ± 31.5
ufc             81.8 ± 4.6     81.6 ± 4.6     84.9 ± 4.7     80.6 ± 4.1
birthwt         139.0 ± 9.9    139.0 ± 9.9    142.6 ± 11.6   139.8 ± 11.7
crabs           12.3 ± 1.0     12.3 ± 1.0     12.6 ± 1.0     11.9 ± 0.9
GAGurine        62.6 ± 8.2     62.6 ± 8.2     64.5 ± 7.5     62.6 ± 8.1
geyser          110.2 ± 7.8    110.1 ± 7.8    109.4 ± 7.1    111.3 ± 8.2
gilgais         47.4 ± 4.4     47.2 ± 4.4     49.9 ± 3.6     46.9 ± 4.6
topo            71.1 ± 13.0    70.1 ± 13.7    73.1 ± 11.8    69.6 ± 13.4
BostonHousing   48.5 ± 5.0     48.5 ± 5.0     49.7 ± 4.7     47.4 ± 4.7
CobarOre        0.5 ± 0.5      0.5 ± 0.5      5.0 ± 4.9      0.6 ± 0.5
engel           61.3 ± 18.3    61.2 ± 19.0    58.7 ± 17.9    64.4 ± 23.2
mcycle          89.2 ± 8.5     88.9 ± 8.4     102.0 ± 11.7   84.3 ± 10.3
BigMac2003      71.0 ± 21.0    70.9 ± 21.1    68.7 ± 18.1    67.6 ± 20.9
UN3             99.5 ± 7.0     99.4 ± 7.0     101.8 ± 7.1    98.8 ± 7.6
cpus            20.0 ± 13.7    19.9 ± 13.6    23.8 ± 16.0    19.7 ± 13.7

Crossing loss (×100):
Data set        IND.           IND. (NC)      MTFL           JQR
caution         0.53 ± 0.67    0.31 ± 0.70    0.69 ± 0.54    0.09 ± 0.14
ftcollinssnow   0.00 ± 0.00    0.00 ± 0.00    0.00 ± 0.00    0.00 ± 0.00
highway         9.08 ± 7.38    9.00 ± 7.39    3.48 ± 4.49    8.81 ± 7.46
heights         0.04 ± 0.05    0.04 ± 0.05    0.07 ± 0.14    0.00 ± 0.00
sniffer         1.01 ± 0.75    0.52 ± 0.48    1.23 ± 0.77    0.15 ± 0.22
snowgeese       3.24 ± 5.10    2.60 ± 4.28    8.93 ± 19.52   0.94 ± 3.46
ufc             0.24 ± 0.22    0.27 ± 0.42    0.82 ± 1.47    0.05 ± 0.15
birthwt         0.00 ± 0.00    0.00 ± 0.00    0.31 ± 0.88    0.00 ± 0.00
crabs           0.46 ± 0.33    0.35 ± 0.24    0.30 ± 0.22    0.06 ± 0.20
GAGurine        0.05 ± 0.08    0.04 ± 0.07    0.05 ± 0.09    0.03 ± 0.08
geyser          0.87 ± 1.60    0.92 ± 2.02    0.80 ± 1.18    0.72 ± 1.51
gilgais         1.23 ± 0.96    0.95 ± 0.85    0.71 ± 0.96    0.81 ± 0.43
topo            2.72 ± 3.26    1.52 ± 2.47    2.75 ± 2.93    1.14 ± 2.02
BostonHousing   0.64 ± 0.32    0.48 ± 0.27    1.11 ± 0.33    0.58 ± 0.34
CobarOre        0.10 ± 0.13    0.10 ± 0.13    0.29 ± 0.35    0.02 ± 0.05
engel           1.50 ± 4.94    1.25 ± 4.53    1.65 ± 5.97    0.06 ± 0.14
mcycle          2.10 ± 1.83    0.92 ± 1.25    1.13 ± 1.10    0.14 ± 0.37
BigMac2003      2.50 ± 2.12    1.87 ± 1.68    0.73 ± 0.92    1.55 ± 1.75
UN3             1.06 ± 0.85    0.85 ± 0.70    0.65 ± 0.62    0.09 ± 0.31
cpus            1.29 ± 1.13    1.17 ± 1.15    0.46 ± 0.28    0.09 ± 0.13
PDCD is described in Algorithm 1 where, for α = (α_i)_{1≤i≤n} ∈ (ℝ^p)^n, α^l ∈ ℝ^n denotes its l-th
row vector and α_i^l its i-th component, diag is the operator mapping a vector to a diagonal matrix,
and proj_1 and proj_{[C(τ_l−1), Cτ_l]} are respectively the projectors onto the vector 1 and onto the
compact set [C(τ_l − 1), Cτ_l]. PDCD uses dual variables µ ∈ (ℝ^p)^n (which are updated during the
descent) and has two sets of parameters σ ∈ (ℝ^p)^n and η ∈ (ℝ^p)^n that verify, for all
(i, l) ∈ N_n × N_p: η_i^l < 1 / ((K(x_i, x_i))_{l,l} + σ_i^l). In practice, we kept the same parameters as
in [11]: σ_i^l = 10 (K(x_i, x_i))_{l,l} and η_i^l equal to 0.95 times the bound. Moreover, as is standard
for coordinate descent methods, our implementation uses efficient updates for the computation of both
∑_{j=1}^n K(x_i, x_j) α_j and ᾱ^l.
6
Numerical experiments
Two sets of experiments are presented, respectively aimed at assessing the ability of our methodology
to predict quantiles and at comparing an implementation of Algorithm 1 with an off-the-shelf solver
and an augmented Lagrangian scheme. Following the previous sections, a decomposable kernel
K(x, x′) = k(x, x′) B is used, where B = (exp(−γ(τ_i − τ_j)²))_{1≤i,j≤p} and
k(x, x′) = exp(−‖x − x′‖²_{ℓ2} / (2σ²)), with σ being the 0.7-quantile of the pairwise distances of
the training data {x_i}_{1≤i≤n}. Quantile levels of interest are τ = (0.1, 0.3, 0.5, 0.7, 0.9).
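The bandwidth heuristic is easy to reproduce; a one-function sketch (ours):

    import numpy as np
    from scipy.spatial.distance import pdist

    def bandwidth(X, q=0.7):
        """sigma = q-quantile of the pairwise distances of the training inputs
        (X is a 2-D array of shape (n, d)), as used for the Gaussian kernel."""
        return np.quantile(pdist(X), q)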
6.1
Quantile regression
Quantile regression is assessed with two criteria: the pinball loss (1/n) ∑_{i=1}^n ℓ_τ(y_i 1 − h(x_i)),
which is the quantity minimized to build the proposed estimator, and the crossing loss
∑_{j=1}^{p−1} (1/n) ∑_{i=1}^n max(0, h_{j+1}(x_i) − h_j(x_i)), which, assuming that τ_j > τ_{j+1}, quantifies
how far h_j goes below h_{j+1}, while h_j is expected to always stay above h_{j+1}. More experiments
are reported in Appendix D.1.
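A minimal sketch of the crossing criterion (our code; columns are assumed ordered by decreasing
quantile level, matching the convention τ_j > τ_{j+1} above):

    import numpy as np

    def crossing_loss(H):
        """H is (n, p) with column j the tau_j-quantile predictions, ordered so
        that h_j should stay above h_{j+1}. Returns
        sum_j mean_i max(0, h_{j+1}(x_i) - h_j(x_i))."""
        gaps = H[:, 1:] - H[:, :-1]             # h_{j+1} - h_j at each sample
        return np.maximum(0.0, gaps).mean(axis=0).sum()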
This study focuses on three non-parametric models based on RKHS theory. Other linear and
spline-based models have been dismissed since Takeuchi et al. [26] have already provided a comparison
of those with kernel methods. First, we considered an independent estimation of quantile regressors
(IND.), which boils down to setting B = I (this approach could be set up without vector-valued
RKHSs, with scalar-valued kernels only). Second, hard non-crossing constraints on the training data
have been imposed (IND. (NC)), as proposed in [26]. Third, the proposed joint estimator (JQR) uses
the Gaussian matrix B presented above.
Quantile regression with multi-task feature learning (MTFL), as proposed in [27], is also included.
For a fair comparison, each point is mapped with φ(x) = (k(x, x_1), . . . , k(x, x_n)) and the estimator
h(x) = W^⊤ φ(x) + b (W ∈ ℝ^{n×p}) is learned jointly with the PSD matrix D ∈ ℝ^{n×n} of the
regularizer Ω(h) = tr(W^⊤ D^{−1} W). This comes down to alternating our approach (with B = I
and k(·, ·) = ⟨· | D ·⟩_{ℓ2}) and the update D ← (W W^⊤)^{1/2} / tr((W W^⊤)^{1/2}).

Algorithm 1 Primal-Dual Coordinate Descent.
    Initialize α_i, µ_i ∈ ℝ^p (∀i ∈ N_n).
    repeat
        Choose (i, l) ∈ N_n × N_p uniformly at random.
        Set ᾱ^l ← proj_1(µ^l + diag(σ^l) α^l).
        Set d_i^l ← ∑_{j=1}^n (K(x_i, x_j) ᾱ_j)^l − y_i + 2ᾱ_i^l − µ_i^l.
        Set α̃_i^l ← proj_{[C(τ_l−1), Cτ_l]}(α_i^l − η_i^l d_i^l).
        Update coordinate (i, l): µ_i^l ← ᾱ_i^l, α_i^l ← α̃_i^l, and keep other coordinates unchanged.
    until the duality gap (1)-(2) is small enough

Table 2: CPU time (s) for training a model.
Size   QP                 AUG. LAG.         PDCD
250    8.73 ± 0.34        261.11 ± 46.69    18.69 ± 3.54
500    75.53 ± 2.98       865.86 ± 92.26    61.30 ± 7.05
1000   621.60 ± 30.37     n/a               266.50 ± 41.16
2000   3416.55 ± 104.41   n/a               958.93 ± 107.80
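To fix ideas, a rough, didactic numpy sketch of Algorithm 1 follows; since parts of the source are
ambiguous, the step-size rule is our reading of the reported heuristic, and the O(np)-per-iteration
update is deliberately naive rather than the efficient incremental one mentioned above:

    import numpy as np

    def pdcd(K, B, y, taus, C, n_iter=200_000, seed=0):
        """Sketch of PDCD for the dual (2) with a decomposable kernel
        K(x_i, x_j) = K[i, j] * B. Step sizes: sigma = 10 * diagonal and
        eta = 0.95 times its admissible bound (our interpretation)."""
        rng = np.random.default_rng(seed)
        n, p = K.shape[0], len(taus)
        alpha = np.zeros((n, p))                   # dual variables of (2)
        mu = np.zeros((n, p))                      # PDCD auxiliary duals
        kdiag = np.outer(np.diag(K), np.diag(B))   # (K(x_i, x_i))_{l,l}
        sigma = 10.0 * kdiag
        eta = 0.95 / (kdiag + sigma)
        lo, up = C * (taus - 1.0), C * taus        # box constraints of (2)
        for _ in range(n_iter):
            i, l = rng.integers(n), rng.integers(p)
            # proj_1 replaces every entry of a column by its mean
            # (projection onto the line spanned by the all-ones vector).
            abar = np.tile((mu + sigma * alpha).mean(axis=0), (n, 1))
            d = K[i] @ (abar @ B[:, l]) - y[i] + 2.0 * abar[i, l] - mu[i, l]
            a_new = np.clip(alpha[i, l] - eta[i, l] * d, lo[l], up[l])
            mu[i, l], alpha[i, l] = abar[i, l], a_new
        return alpha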
To present an honest comparison of these four methods, we did not choose datasets for the benefit
of our method but considered the ones used in [26]. These 20 datasets (whose names are indicated
in Table 1) come from the UCI repository and from three R packages: quantreg, alr3 and MASS. The
sample sizes vary from 38 (CobarOre) to 1375 (heights) and the numbers of explanatory variables
vary from 1 (5 sets) to 12 (BostonHousing). The datasets were standardized coordinate-wise to
have zero mean and unit variance. Results are given in Table 1 as the mean and standard deviation
of the test losses recorded on 20 random train-test splits with ratio 0.7-0.3. The best result in
each line is boldfaced and the bullets indicate the significant differences of each competitor from
JQR (based on a Wilcoxon signed-rank test with significance level 0.05).
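The significance test is standard; a sketch with scipy (the loss arrays are illustrative stand-ins for
the recorded per-split test losses):

    import numpy as np
    from scipy.stats import wilcoxon

    rng = np.random.default_rng(0)
    losses_jqr = rng.random(20)          # placeholder: JQR losses on 20 splits
    losses_ind = rng.random(20)          # placeholder: competitor losses
    stat, pvalue = wilcoxon(losses_ind, losses_jqr)   # paired signed-rank test
    significant = pvalue < 0.05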
The parameter C is chosen by cross-validation (minimizing the pinball loss) on a logarithmic
grid (10⁻⁵, 10⁻⁴, . . . , 10⁵) for all methods and datasets. For our approach (JQR), the parameter γ
is chosen in the same grid as C, with extra candidates 0 and +∞. Finally, for a balanced comparison,
the dual optimization problems corresponding to each approach are solved with CVXOPT [2].
Regarding the pinball loss, joint quantile regression compares favorably to independent and hard
non-crossing constraint estimations for 12 vs 8 datasets (5 vs 1 significantly different). These results
bear out the assumption concerning the relationship between conditional quantiles and the usefulness
of multiple-output methods for quantile regression. Prediction is also enhanced compared to MTFL
for 15 vs 5 datasets (11 vs 1 significantly different).
The crossing loss clearly shows that joint regression makes it possible to weaken the crossing
problem, in comparison to independent estimation and hard non-crossing constraints (18 vs 1
favorable datasets and 9 vs 0 significantly different). Results are similar compared to MTFL (16 vs 3,
12 vs 1). Note that for IND. (NC), the crossing loss is null on the training data by construction, but
not necessarily on the test data. In addition, let us remark that model selection (in particular for γ,
which tunes the trade-off between heteroscedastic and homoscedastic regressors) has been performed
based on the pinball loss only. It seems that, in a way, the pinball loss embraces the crossing loss as a
subcriterion.
6.2
Learning algorithms
This section is aimed at comparing three implementations of algorithms for estimating joint quantile
regressors (solving Problem (2)) in terms of their running (CPU) time. First, the off-the-shelf solver
(based on an interior-point method) included in CVXOPT [2] (QP) is applied to Problem (2) turned
into a standard-form linearly constrained quadratic program. Second, an augmented Lagrangian
scheme (AUG. LAG.) is used in order to get rid of the linear constraints and to make it possible to use
a coordinate descent approach (detailed procedure in Appendix C). In this scheme, the inner solver
is Algorithm 1 with the intercept dismissed, which boils down to the algorithm proposed in [23].
The last approach (PDCD) is Algorithm 1.
We use a synthetic dataset (the same as in Figure 1), for which X ∈ [0, 1.5]. The target Y is computed
as a sine curve at 1 Hz modulated by a sine envelope at 1/3 Hz and mean 1. Moreover, this
pattern is distorted with a random Gaussian noise with mean 0 and a linearly decreasing standard
deviation, from 1.2 at X = 0 to 0.2 at X = 1.5. Parameters for the models are (C, γ) = (10², 10⁻²).
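One plausible reading of this generator, as a numpy sketch (the exact form of the modulation is our
interpretation of the description):

    import numpy as np

    def synthetic_data(n, seed=0):
        """1 Hz sine modulated by a 1/3 Hz sine envelope of mean 1, plus
        heteroscedastic Gaussian noise whose standard deviation decays
        linearly from 1.2 (x = 0) to 0.2 (x = 1.5)."""
        rng = np.random.default_rng(seed)
        x = rng.uniform(0.0, 1.5, size=n)
        envelope = 1.0 + np.sin(2.0 * np.pi * x / 3.0)
        signal = envelope * np.sin(2.0 * np.pi * x)
        std = 1.2 + (0.2 - 1.2) * x / 1.5
        return x, signal + rng.normal(scale=std)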
To compare the implementations of the three algorithms, we first run QP, with a relative tolerance set
to 10⁻², and store the optimal objective value. Then, the two other methods (AUG. LAG. and PDCD)
are launched and stopped when they pass the objective value reached by QP (optimal objective
values are reported in Appendix D.2). Table 2 gives the mean and standard deviation of the CPU
time required by each method for 10 random datasets and several sample sizes. Some statistics are
missing because AUG. LAG. ran out of time.
As expected, it appears that for a not too tight tolerance and big datasets, the implementation of
Algorithm 1 outperforms the two other competitors. Let us remark that QP is also more expensive in
memory than coordinate-based algorithms like ours. Moreover, the training time may seem high in
comparison to usual SVMs. However, let us first recall that we jointly learn p regressors; a
fair comparison should thus be done with an SVM applied to an np × np matrix, instead of an n × n one.
In addition, there is no sample sparsity in quantile regression, while sparsity is precisely what speeds
up SVM training.
Last but not least, in order to illustrate the use of our algorithm, we have run it on two 2000-point
datasets from economics and medicine: the U.S. 2000 Census data, consisting of the annual salary and
9 related features on workers, and the 2014 National Center for Health Statistics' data, regarding
girl birth weight and 16 statistics on parents.¹ Parameters (C, γ) have been set to (1, 100) and
(0.1, 1) respectively for the Census and NCHS datasets (determined by cross-validation). Figure 2
depicts 9 estimated conditional quantiles of the salary with respect to education (17 levels, from
no schooling completed to doctorate degree) and of the birth weight (in grams) vs the mother's
pre-pregnancy weight (in pounds). As expected, the Census data reveal an increasing and heteroscedastic
trend, while the newborn's weight does not seem correlated with the mother's weight.
¹ Data are available at www.census.gov/census2000/PUMS5.html and www.nber.org/data/vital-statistics-natality-data.html.
Figure 2: Estimated conditional quantiles for the Census data (left, salary vs education) and the
NCHS data (right, birth weight vs mother's pre-pregnancy weight).
7
Conclusion
This paper introduces a novel framework for joint quantile regression, which is based on vector-valued
RKHSs. It comes along with theoretical guarantees and an efficient learning algorithm.
Moreover, this methodology, which keeps the quantile property, exhibits little curve crossing and
enhanced performance compared to independent estimations and hard non-crossing constraints.
Going forward, let us remark that this framework benefits from all the tools now associated with
vector-valued RKHSs, such as manifold learning for the semi-supervised setting, multiple kernel
learning for measuring feature importance, and random Fourier features for very large scale
applications. Moreover, extensions of our methodology to multivariate output variables are to be
investigated, given that this requires choosing among the various definitions of multivariate quantiles.
Acknowledgments
This work was supported by the industrial chair "Machine Learning for Big Data".
References
[1] M.A. Alvarez, L. Rosasco, and N.D. Lawrence. Kernels for vector-valued functions: A review. Foundations and Trends in Machine Learning, 4(3):195–266, 2012. arXiv:1106.6251.
[2] M.S. Anderson, J. Dahl, and L. Vandenberghe. CVXOPT: A Python package for convex optimization, version 1.1.5, 2012.
[3] A. Argyriou, T. Evgeniou, and M. Pontil. Convex multi-task feature learning. Machine Learning, 73(3):243–272, 2008.
[4] P.L. Bartlett and S. Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 3:463–482, 2002.
[5] H.D. Bondell, B.J. Reich, and H. Wang. Noncrossing quantile regression curve estimation. Biometrika, 97(4):825–838, 2010.
[6] C. Brouard, F. d'Alché-Buc, and M. Szafranski. Semi-supervised penalized output kernel regression for link prediction. In Proceedings of the 28th International Conference on Machine Learning, 2011.
[7] C. Brouard, M. Szafranski, and F. d'Alché-Buc. Input output kernel regression: Supervised and semi-supervised structured output prediction with operator-valued kernels. Journal of Machine Learning Research, 17(176):1–48, 2016.
[8] V. Chernozhukov, I. Fernández-Val, and A. Galichon. Quantile and probability curves without crossing. Econometrica, 78(3):1093–1125, 2010.
[9] H. Dette and S. Volgushev. Non-crossing non-parametric estimates of quantile curves. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 70(3):609–627, 2008.
[10] F. Dinuzzo, C.S. Ong, P. Gehler, and G. Pillonetto. Learning output kernels with block coordinate descent. In Proceedings of the 28th International Conference on Machine Learning, 2011.
[11] O. Fercoq and P. Bianchi. A coordinate descent primal-dual algorithm with large step size and possibly non separable functions. arXiv:1508.04625 [math], 2015.
[12] M. Hallin and M. Šiman. Elliptical multiple-output quantile regression and convex optimization. Statistics & Probability Letters, 109:232–237, 2016.
[13] X. He. Quantile curves without crossing. The American Statistician, 51(2):186–192, 1997.
[14] H. Kadri, E. Duflos, P. Preux, S. Canu, A. Rakotomamonjy, and J. Audiffren. Operator-valued kernels for learning from functional response data. Journal of Machine Learning Research, 16:1–54, 2015.
[15] R. Koenker. Quantile Regression. Cambridge University Press, Cambridge, New York, 2005.
[16] R. Koenker and G. Bassett. Regression quantiles. Econometrica, 46(1):33–50, 1978.
[17] Y. Li, Y. Liu, and J. Zhu. Quantile regression in reproducing kernel Hilbert spaces. Journal of the American Statistical Association, 102(477):255–268, 2007.
[18] Y. Liu and Y. Wu. Simultaneous multiple non-crossing quantile regression estimation using kernel constraints. Journal of Nonparametric Statistics, 23(2):415–437, 2011.
[19] A. Maurer. A vector-contraction inequality for Rademacher complexities. In Proceedings of the 27th International Conference on Algorithmic Learning Theory, 2016.
[20] C.A. Micchelli and M.A. Pontil. On learning vector-valued functions. Neural Computation, 17:177–204, 2005.
[21] H.Q. Minh, L. Bazzani, and V. Murino. A unifying framework in vector-valued reproducing kernel Hilbert spaces for manifold regularization and co-regularized multi-view learning. Journal of Machine Learning Research, 17(25):1–72, 2016.
[22] S.K. Schnabel and P.H.C. Eilers. Simultaneous estimation of quantile curves using quantile sheets. AStA Advances in Statistical Analysis, 97(1):77–87, 2012.
[23] S. Shalev-Shwartz and T. Zhang. Stochastic dual coordinate ascent methods for regularized loss minimization. Journal of Machine Learning Research, 14:567–599, 2013.
[24] V. Sindhwani, M.H. Quang, and A.C. Lozano. Scalable matrix-valued kernel learning for high-dimensional nonlinear multivariate regression and Granger causality. In Proceedings of the Twenty-Ninth Conference on Uncertainty in Artificial Intelligence, 2013.
[25] I. Takeuchi and T. Furuhashi. Non-crossing quantile regressions by SVM. In Proceedings of the 2004 IEEE International Joint Conference on Neural Networks, July 2004.
[26] I. Takeuchi, Q.V. Le, T.D. Sears, and A.J. Smola. Nonparametric quantile estimation. Journal of Machine Learning Research, 7:1231–1264, 2006.
[27] I. Takeuchi, T. Hongo, M. Sugiyama, and S. Nakajima. Parametric task learning. In Advances in Neural Information Processing Systems 26, pages 1358–1366. Curran Associates, Inc., 2013.
[28] Y. Wu and Y. Liu. Stepwise multiple quantile regression estimation using non-crossing constraints. Statistics and Its Interface, 2:299–310, 2009.
Analogy--Watershed or Waterloo?
Structural alignment and the development of
connectionist models of analogy
Dedre Gentner
Department of Psychology
Northwestern University
2029 Sheridan Rd.
Evanston, IL 60208
Arthur B. Markman
Department of Psychology
Northwestern University
2029 Sheridan Rd.
Evanston, IL 60208
ABSTRACT
Neural network models have been criticized for their inability to make
use of compositional representations. In this paper, we describe a series
of psychological phenomena that demonstrate the role of structured
representations in cognition. These findings suggest that people
compare relational representations via a process of structural alignment.
This process will have to be captured by any model of cognition,
symbolic or subsymbolic.
1.0 INTRODUCTION
Pattern recognition is central to cognition. At the perceptual level, we notice key features
of the world (like symmetry), recognize objects in front of us and identify the letters on a
printed page. At a higher level, we recognize problems we have solved before and
determine similarities-including analogical similarities-between new situations and old
ones. Neural network models have been successful at capturing sensory pattern
recognition (e.g., Sabourin & Mitiche, 1992). In contrast, the determination of higher
level similarities has been well modeled by symbolic processes (Falkenhainer, Forbus, &
Gentner, 1989). An important question is whether neural networks can be extended to
high-level similarity and pattern recognition.
In this paper, we will summarize the constraints on cognitive representations suggested
by the psychological study of similarity and analogy. We focus on three themes: (1)
structural alignment; (2) structural projection; and (3) flexibility.
2.0 STRUCTURAL ALIGNMENT IN SIMILARITY
Extensive psychological research has examined the way people compare pairs of items to
determine their similarity. Mounting evidence suggests that the similarity of two
complex items depends on the degree of match between their component objects (common
and distinctive attributes) and on the degree of match between the relations among the
component objects. Specifically, there is evidence that (I) similarity involves structured
pattern matching, (2) similarity involves structured pattern completion, (3) comparing the
same item with different things can highlight different aspects of the item and (4) even
comparisons of a single pair of items may yield multiple interpretations. We will
examine these four claims in the following sections.
2.1
SIMILARITY INVOLVES STRUCTURED PATTERN MATCHING
The central idea underlying structured pattern matching is that similarity involves an
alignment of relational structure. For example, in Figure la, configuration A is clearly
more similar to the top configuration than configuration B, because A has similar objects
taking part in the same relation (above), while B has similar objects taking part in a
different relation (next-to). This determination can be made regardless of whether the
objects taking part in the relations are similar. For example, in Figure Ib configuration
A is also more similar to the top configuration than is configuration B, because A shares
a relation with the top configuration, while B does not. As a check on this intuition, 10
subjects were asked to tell us which configuration (A or B) went best with the top
configuration for the triads in Figures 1a and 1b. All 10 subjects chose configuration A
for both triads. This example demonstrates that relations (such as the common above
relation) are important in similarity processing.
Figure 1. Examples of structural alignment in perception.
The importance of relations was also demonstrated by Palmer (1978) who asked subjects
to rate the similarity of pairs of configurations like those in Figure 2. The pair in Figure
2a shares the global property that both are open figures, while the pair in Figure 2b does
not. As would be expected if subjects attend to relations when determining similarity,
Structural alignment & the development of connectionist models of analogy
higher similarity ratings were given to pairs like the one in Figure 2a than to pairs like
the one in Figure 2b. This finding can only be explained by appealing to structural
similarity, because both pairs of configurations share the same number of local line
segments. Consistent with this result, Palmer also found that subjects were faster to say
that the items in Figure 2b are different than that the items in Figure 2a are different. A
similar result was obtained by Lockhead and King (1977).
Figure 2. Structured pattern matching in a study by Palmer (1978).
Further research suggests that common bindings between relations and the items they
relate are also central to similarity. For example, Clement and Gentner (1991) presented
subjects with pairs of analogous stories. One story described organisms called Tams that
ate rocks, while the other described robots that collected data on a planet. In each story,
one matching fact also had a matching causal antecedent. For example, the Tams'
exhausting the minerals on the rock CAUSED them to move to another rock, while
the robots' exhausting the data on a planet CAUSED them to move to another
planet. A second matching fact did not have a matching causal antecedent. For
example, the Tams' underbelly could not function on a new rock and the
robots' probe could not function on a new planet, but the causes of these facts
did not match. Subjects were asked which of the two pairs of key facts (shown in bold)
was more important to the stories. Subjects selected the pair that had the matching causal
antecedent, suggesting that their mappings preserved the relational connections in the
stories.
2.2
STRUCTURED PATTERN COMPLETION
Pattern completion has long been a central feature of neural network models (Anderson,
Silverstein, Ritz, & Jones, 1977; Hopfield, 1982). For example, in the BSB model of
Anderson et aI., vectors in which some units are below their maximum value are filled in
by completing a pattern based on the vector similarities of the current activation pattern
to previously learned patterns.
The key issue here is the kind of information that guides pattern completion in humans.
Data from psychological studies suggests that subjects' pattern matching ability is
controlled by structural similarities rather than by geometric similarities. For example,
Medin and Goldstone presented subjects with pairs of objects like those in Figure 3
(Medin, Goldstone & Gentner, in press). The left-hand figure in both pairs is somewhat
ambiguous, but the right-hand figure is not. Subjects who were asked to list the
commonalities of the pair in Figure 3a said that both figures had three prongs, while
subjects who were asked to list the commonalities of the pair in Figure 3b said that both
figures had four prongs. This finding was obtained for 15 of 16 triads tested, and suggests
that subjects were mapping the structure from the unambiguous figure onto the
ambiguous one. Of course, in order for the mapping to take place, the underlying
structure of the figures has to be readily alignable, and there must be ambiguity in the
target figure. In the pair in Figure 3c, the left hand item cannot be viewed as having four
prongs, and so this mapping is not made.
Figure 3: Example of structured pattern completion.
Structured pattern completion also occurs in conceptual structures. Clement and Gentner
(1991) extended the study described above by deleting the key matching facts from one of
the stories (e.g., the bold facts from the robot story). Subjects read both stories, then
predicted one new fact about the robot story. Subjects were free to predict anything at all,
but 50% of the subjects predicted the fact with the matching causal antecedent, while only
28% of the subjects predicted the fact with no matching causal antecedent. By
comparison, a control group that made predictions about the target story without seeing
the base predicted both facts at the same rate (about 5%). This finding underlines the
importance of connectivity in pattern completion. People's predictions were determined
not just by the local information, but by whether it was connected to matching
information. Thus, pattern completion is structure-sensitive.
2.3 DIFFERENT COMPARISONS-DIFFERENT INTERPRETATIONS
Comparison is flexible. When an item takes part in many comparisons, it may be
interpreted differently in each comparison. For example, in Figure 3a, the left figure is
interpreted as having 3 prongs, while in Figure 3b, it is interpreted as having 4 prongs.
Similarly, the comparison 'My surgeon is a butcher' conveys a clumsy surgeon, but
'Genghis Khan was a butcher' conveys a ruthless killer (Glucksberg and Keysar, 1992).
This type of flexibility is also evident in an example presented by Spellman and Holyoak
(1992). They pointed out that some politicians likened the Gulf War to World War II,
implying that the United States was acting as the world's policeman to stop a tyrant.
Other politicians compared Operation Desert Storm to Vietnam, implying that the United
States entered into a potentially endless conflict between two other nations. Clearly,
different comparisons highlighted different features of the Gulf War.
Structural alignment & the development of connectionist models of analogy
2.4
SAME COMPARISON-DIFFERENT INTERPRETATIONS
Even a single comparison can yield more than one distinct interpretation. This situation
may arise when the items are richly represented, with many different clusters of
knowledge. It can also arise when the comparison permits more than one alignment, as
when the similarities of the objects in an item suggest different correspondences than do
the relational similarities (i.e., components are cross-mapped; Gentner & Toupin, 1986).
Markman and Gentner (in press) presented subjects with pairs of scenes like those
depicting the perceptual higher order relation monotonic increase in size shown in Figure
4. In Figure 4, the circle with the arrow over it in the left-hand figure is the largest circle
in the array. It is cross-mapped, since it is the same size as the middle circle in the right-hand figure, but plays the same relational role as the left (largest) circle. Subjects were
given a mapping task in which they were asked to point to the object in the right-hand
figure that went with with the cross-mapped circle in the left-hand figure. In this task,
subjects chose the circle that looked most similar 91 % of the time. However, a second
group of subjects, who rated the similarity of the pair before doing this mapping, selected
the object playing the same relational role 61 % of the time. In both tasks, when subjects
were asked whether there were any other good choices, they generally described the other
possible mapping. These results show that the same comparison can be aligned in
different ways, and that similarity comparisons promote structural alignment.
Figure 4: Stimuli with a cross-mapping from Markman and Gentner (in press).
Goldstone (personal communication) has demonstrated that, not only are comparisons
flexible, but subjects can attend to attribute and relation matches selectively. He
presented subjects with triads like the one in Figure 5. Subjects were asked to choose
either the bottom figure with the most attribute similarity to the top one, or the bottom
figure with the most relational similarity to the top one. In this study, and other pilot
studies, subjects were highly accurate at both task, suggesting flexibility to attend to
different kinds of similarity.
Figure 5: Sample stimuli from study by Goldstone.
Similar flexibility can also be found in stimuli with conceptual relations. Gentner (1988)
presented children with double metaphors that can have two meanings, one based on
attribute similarities and a second based on relational similarities. For example, the
metaphor 'Plant stems are like drinking straws' can mean that both are round and skinny,
or that both transport fluids from low places to high places. Gentner found that young
children (age 5-6) made the attribute-based interpretation, while older children (age 9-10)
and adults could make either interpretation (but preferred the relation-based interpretation).
There are limits to this flexibility. People prefer to make structurally consistent
mappings (Gentner, 1983). For example, Spellman and Holyoak (1992) told subjects to
map Operation Desert Storm onto World War II. When they asked subjects to find a
correspondence for George Bush given that Saddam Hussein corresponded to Hitler,
subjects generally chose either FDR or Churchill. Then, subjects were asked to make a
mapping for the United States in 1991. Interestingly, subjects who mapped Bush to
FDR almost always mapped the US in 1991 to the US during World War II. In contrast,
subjects who mapped Bush to Churchill almost always mapped the US in 1991 to Britain
during World War II. Thus, subjects maintained structurally consistent mappings.
This type of flexibility adds significant complexity to the comparison process, because a
system cannot simply be trained to search for relational correspondences or be taught to
prefer only attribute matches. Rather, the comparison process must determine both
attribute and relation matches and must be able to keep different mappings distinct from
each other.
2.5
SUMMARY OF EMPIRICAL EVIDENCE
These findings suggest that comparisons of both perceptual and conceptual materials
involve structural alignment. Further, structural alignment promotes structure sensitive
pattern completion. Finally, comparisons allow for multiple interpretations of a single
item in different comparisons or multiple interpretations of a single comparison. Any
model of human cognition that involves comparison must exhibit these properties.
3.0 IMPLICATIONS FOR COGNITIVE MODELS
Many of the questions concerning the adequacy of connectionist models and neural
networks for high-level cognitive tasks have centered on linguistic processing and the
crucial role of compositional relational structures in sentence comprehension (Fodor &
McLaughlin, 1990; Fodor & Pylyshyn, 1988). Recent work has addressed this problem
by examining ways to represent hierarchical structure in connectionist models,
implementing stacks and binary trees to model variable binding and recursive sentence
processing (e.g., Elman, 1990; Pollack, 1990; Smolensky, 1990; see also Quinlan, 1991
for a review). It is too soon to tell how successful these methods will be, or whether
they can be extended to the general case of structural alignment.
The results summarized here underline the need for representations that permit structural
alignment. How should this be done? As van Gelder (1990) discusses, symbolic systems
traditionally use concatenative representation, in which symbol names are concatenated to
build a compositional representation. For example, a circle above a triangle could be
represented by the assertion above(circle,triangle). Such symbolic representations have
been used to model the analogy and similarity phenomena described here with some
success (Falkenhainer, et al., 1989). Van Gelder (1990) suggests a weaker criterion of
functional compositionality. In functionally compositional representations, tokens for
the symbols are not directly present in the representation, but they can be extracted from
the representation via some process. Van Gelder suggests that the natural representation
used by neural networks is functionally compositional. Analogously, the question of
whether connectionist models can model the phenomena described here should be couched
in terms of functional alignability: whether the representations can be decomposed and
aligned, rather than whether the structure is transparently present.
Along this track, an intriguing question is whether the surface form of functionally
compositional representations will be similar to the degree that the structures they
represent are similar. If so, the alignment process could take place simply by comparing
activation vectors. As yet, there are no networks that exhibit this behavior. Further,
given the evidence that geometric representations are insufficient to model human
similarity comparisons (see Tversky (1977) for a review), we are pessimistic about the
prospects that this type of model will be developed.
In conclusion, substantial psychological evidence suggests that determining the similarity
of two items requires a flexible alignment of structured representations. We suspect that
connectionist models of cognitive processes that involve comparisons will have to
exhibit concatenative compositionality in order to capture the flexibility inherent in
comparisons. However, we leave open the possibility that systems exhibiting functional
alignability will be successful.
Acknowledgments
This research was sponsored by ONR grant BNS-87-20301. We thank Jon Handler, Ed
Wisniewski, Phil Wolff and the whole Similarity and Analogy group for comments on
this work. We also thank Laura Kotovsky, Catherine Kreiser and Russ Poldrack for
running the pilot studies described above.
References
Anderson, J. A., Silverstein, J. W., Ritz, S. A., & Jones, R. S. (1977). Distinctive
features, categorical perception and probability learning: Some applications of a neural
model. Psychological Review, 84, 413-451.
Clement, C. A., & Gentner, D. (1991). Systematicity as a selection constraint in
analogical mapping. Cognitive Science, 15, 89-132.
Elman, J. L. (1990). Finding structure in time. Cognitive Science, 14(2), 179-211.
Falkenhainer, B., Forbus, K. D., & Gentner, D. (1989). The structure-mapping engine:
Algorithm and examples. Artificial Intelligence, 41(1), 1-63.
Fodor, J., & McLaughlin, B. (1990). Connectionism and the problem of systematicity:
Why Smolensky's solution doesn't work. Cognition, 35, 183-204.
Fodor, J. A., & Pylyshyn, Z. W. (1988). Connectionism and cognitive architecture: A
critical analysis. Cognition, 28, 3-71.
Gentner, D. (1983). Structure mapping: A theoretical framework for analogy. Cognitive
Science, 7, 155-170.
Gentner, D. (1988). Metaphor as structure mapping: The relational shift. Child
Development, 59, 47-59.
Gentner, D., & Toupin, C. (1986). Systematicity and surface similarity in the
development of analogy. Cognitive Science, 10, 277-300.
Glucksberg, S., & Keysar, B. (1990). Understanding metaphorical comparisons: Beyond
similarity. Psychological Review, 97(1), 3-18.
Hopfield, J. J. (1982). Neural networks and physical systems with emergent collective
computational abilities. Proceedings of the National Academy of Sciences, 79, 2554-2558.
Lockhead, G. R., & King, M. C. (1977). Classifying integral stimuli. Journal of
Experimental Psychology: Human Perception and Performance, 3(3), 436-443.
Markman, A. B., & Gentner, D. (in press). Structural alignment during similarity
comparisons. Cognitive Psychology.
Medin, D. L., Goldstone, R. L., & Gentner, D. (in press). Respects for similarity.
Psychological Review.
Palmer, S. E. (1978). Structural aspects of visual similarity. Memory and Cognition,
6(2), 91-97.
Pollack, J. B. (1990). Recursive distributed representations. Artificial Intelligence,
46(1-2), 77-105.
Quinlan, P. T. (1991). Connectionism and Psychology: A psychological perspective on
new connectionist research. Chicago: The University of Chicago Press.
Sabourin, M., & Mitiche, A. (1992). Optical character recognition by a neural network.
Neural Networks, 5(5), 843-852.
Smolensky, P. (1990). Tensor product variable binding and the representation of
symbolic structures in connectionist systems. Artificial Intelligence, 46, 159-216.
Spellman, B. A., & Holyoak, K. J. (1992). If Saddam is Hitler then who is George
Bush? Analogical mapping between systems of social roles. Journal of Personality and
Social Psychology, 62(6), 913-933.
Tversky, A. (1977). Features of similarity. Psychological Review, 84(4), 327-352.
van Gelder, T. (1990). Compositionality: A connectionist variation on a classical theme.
Cognitive Science, 14(3), 355-384.
5,792 | 6,240 | Mixed Linear Regression with Multiple Components
Kai Zhong 1
Prateek Jain 2
Inderjit S. Dhillon 3
2
University of Texas at Austin
Microsoft Research India
2
[email protected],
[email protected]
3
[email protected]
1,3
1
Abstract
In this paper, we study the mixed linear regression (MLR) problem, where the
goal is to recover multiple underlying linear models from their unlabeled linear
measurements. We propose a non-convex objective function which we show is
locally strongly convex in the neighborhood of the ground truth. We use a tensor
method for initialization so that the initial models are in the local strong convexity
region. We then employ general convex optimization algorithms to minimize the
objective function. To the best of our knowledge, our approach provides the first exact
recovery guarantees for the MLR problem with K ≥ 2 components. Moreover, our
method has near-optimal computational complexity Õ(Nd) as well as near-optimal
sample complexity Õ(d) for constant K. Furthermore, we show that our non-convex
formulation can be extended to solving the subspace clustering problem
as well. In particular, when initialized within a small constant distance to the true
subspaces, our method converges to the global optima (and recovers true subspaces)
in time linear in the number of points. Furthermore, our empirical results indicate
that even with random initialization, our approach converges to the global optima
in linear time, providing speed-up of up to two orders of magnitude.
1 Introduction
The mixed linear regression (MLR) [7, 9, 29] models each observation as being generated from one
of the K unknown linear models; the identity of the generating model for each data point is also
unknown. MLR is a popular technique for capturing non-linear measurements while still keeping the
models simple and computationally efficient. Several widely-used variants of linear regression, such
as piecewise linear regression [14, 28] and locally linear regression [8], can be viewed as special
cases of MLR. MLR has also been applied in time-series analysis [6], trajectory clustering [15],
health care analysis [11] and phase retrieval [4]. See [27] for more applications.
In general, MLR is NP-hard [29] with the hardness arising due to lack of information about the model
labels (model from which a point is generated) as well as the model parameters. However, under
certain statistical assumptions, several recent works have provided poly-time algorithms for solving
MLR [2, 4, 9, 29]. But most of the existing recovery gurantees are restricted either to mixtures with
K = 2 components [4, 9, 29] or require poly(1/?) samples/time to achieve ?-approximate solution
[7, 24] (analysis of [29] for two components can obtain ? approximate solution in log(1/?) samples).
Hence, solving the MLR problem with K 2 mixtures while using near-optimal number of samples
and computational time is still an open question.
In this paper, we resolve the above question under standard statistical assumptions for a constant
number of mixture components K. To this end, we propose the following smooth objective function as a
surrogate to solve MLR:
$$f(w_1, w_2, \ldots, w_K) := \sum_{i=1}^{N} \prod_{k=1}^{K} \big(y_i - x_i^T w_k\big)^2, \qquad (1)$$
where {(x_i, y_i) ∈ R^{d+1}}_{i=1,2,...,N} are the data points and {w_k}_{k=1,2,...,K} are the model parameters.
The intuition for this objective is that the objective value is zero when {w_k}_{k=1,2,...,K} is the global
optimum and the y_i's do not contain any noise. Furthermore, the objective function is smooth and hence
less prone to getting stuck in arbitrary saddle points or oscillating between two points. The standard
EM algorithm [29] instead makes a "sharp" selection of the mixture component and hence is more
likely to oscillate or get stuck. This intuition is reflected in Figure 1(d), which shows that with random
initialization the EM algorithm routinely gets stuck at poor solutions, while our proposed method
based on the above objective still converges to the global optimum.
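To make Eq. (1) concrete, the following is a minimal NumPy sketch of the objective and its gradient; the function names and the (K, d) layout of W are our own illustrative choices, not code from the paper:

```python
import numpy as np

def mlr_objective(W, X, y):
    """Objective (1): sum_i prod_k (y_i - x_i^T w_k)^2.

    W: (K, d) stacked parameters, X: (N, d) inputs, y: (N,) responses.
    """
    R = y[:, None] - X @ W.T            # residuals r_ik, shape (N, K)
    return np.sum(np.prod(R ** 2, axis=1))

def mlr_gradient(W, X, y):
    """Gradient: df/dw_k = -2 sum_i r_ik (prod_{j != k} r_ij^2) x_i."""
    K = W.shape[0]
    R = y[:, None] - X @ W.T
    R2 = R ** 2
    G = np.empty_like(W)
    for k in range(K):
        mask = np.arange(K) != k
        Pk = np.prod(R2[:, mask], axis=1)   # prod over j != k
        G[k] = -2.0 * (R[:, k] * Pk) @ X
    return G
```

The per-component products are recomputed by masking out column k rather than dividing the full product by r_ik², which would be unstable near exact fits.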
Unfortunately, the above objective function is non-convex and is in general prone to poor saddle
points and local minima. However, under certain standard assumptions, we show that the objective is
locally strongly convex (Theorem 1) in a small basin of attraction near the optimal solution. Moreover,
the objective function is smooth. Hence, we can use the gradient descent method to achieve a linear rate
of convergence to the global optima. But we will need to initialize the optimization algorithm with
an iterate which lies in a small ball around the optima. To this end, we modify the tensor method
in [2, 7] to obtain a "good" initialization point. Typically, tensor methods require computation of
third and higher order moments, which leads to significantly worse sample complexity in terms of
data dimensionality d. However, for the special case of MLR, we provide a small modification of the
standard tensor method that achieves nearly optimal sample and time complexity bounds for constant
K (see Theorem 3). More concretely, our approach requires Õ(d(K log d)^K) many samples and
Õ(Nd) computational time; note the exponential dependence on K. Also, for constant K,
the method has nearly optimal sample and time complexity.
Subspace clustering: MLR can be viewed as a special case of subspace clustering (SC), since each
regressor-response pair lies in the subspace determined by this pair's model parameters. However,
solving MLR using SC approaches is intractable because the dimension of each subspace is only one
less than the ambient dimension, which will easily violate the conditions for the recovery guarantees
of most methods (see e.g. Table 1 in [23] for the conditions of different methods). Nonetheless,
our objective for MLR easily extends to the subspace clustering problem. That is, given data points
{z_i ∈ R^d}_{i=1,2,...,N}, the goal is to minimize the following objective w.r.t. K subspaces (each of
dimension at most r):
$$\min_{U_k \in O_{d \times r},\, k=1,2,\ldots,K} \; f(U_1, U_2, \ldots, U_K) = \sum_{i=1}^{N} \Big\langle \prod_{k=1}^{K} \big(I_d - U_k U_k^T\big),\; z_i z_i^T \Big\rangle. \qquad (2)$$
U_k denotes the basis spanned by the k-th estimated subspace and O_{d×r} ⊂ R^{d×r} denotes the set of
orthonormal matrices, i.e., U^T U = I if U ∈ O_{d×r}. We propose a power-method style algorithm
to alternately optimize (2) w.r.t. {U_k}_{k=1,2,...,K}, which takes only O(rdN) time compared with
O(dN²) for the state-of-the-art methods, e.g. [13, 22, 23].
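As a sketch, objective (2) can be evaluated without ever forming a d × d matrix, since ⟨M, z z^T⟩ = z^T M z lets us apply the K projections to z sequentially; the function name and loop structure below are our own choices:

```python
import numpy as np

def sc_objective(Us, Z):
    """Objective (2): sum_i < prod_k (I - U_k U_k^T), z_i z_i^T >.

    Us: list of (d, r) orthonormal bases; Z: (N, d) data points.
    """
    total = 0.0
    for z in Z:
        v = z
        for U in Us:
            v = v - U @ (U.T @ v)   # apply (I - U_k U_k^T) to the vector
        total += z @ v              # <prod_k (...), z z^T> = z^T (prod_k ...) z
    return total
```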
Although EM with power method [4] shares the same computational complexity as ours, there
is no convergence guarantee for EM to the best of our knowledge. In contrast, we provide local
convergence guarantee for our method. That is, if N = Õ(rK^K) and if the data satisfies certain standard
assumptions, then starting from an initial point {U_k}_{k=1,...,K} that lies in a small ball of constant
radius around the globally optimal solution, our method converges super-linearly to the globally
optimal solution. Unfortunately, our existing analyses do not provide a global convergence guarantee
and we leave it as a topic for future work. Interestingly, our empirical results indicate that even with
randomly initialized {U_k}_{k=1,...,K}, our method is able to recover the true subspaces exactly using
nearly O(rK) samples.
We summarize our contributions below:
(1) MLR: We propose a non-convex continuous objective function for solving the mixed linear
regression problem. To the best of our knowledge, our algorithm is the first work that can handle K ≥ 2
components with a global convergence guarantee in the noiseless case (Theorem 4). Our algorithm has
near-optimal linear (in d) sample complexity and near-optimal computational complexity; however,
our sample complexity dependence on K is exponential.
(2) Subspace Clustering: We extend our objective function to subspace clustering, which can be
optimized efficiently in O(rdN) time compared with O(dN²) for state-of-the-art methods. We also
provide a small basin of attraction in which our iterates converge to the global optima at a super-linear
rate (Theorem 5).
2 Related Work
Mixed Linear Regression:
The EM algorithm without careful initialization is only guaranteed to have local convergence [4, 21, 29].
[29] proposed a grid search method for initialization. However, it is limited to the two-component
case and seems non-trivial to extend to multiple components. It is known that exact minimization
for each step of EM is not scalable due to the O(d2 N + d3 ) complexity. Alternatively, we can
use EM with gradient update, whose local convergence is also guaranteed by [4] but only in the
two-symmetric-component case, i.e., when w2 = w1 .
Tensor Methods for MLR were studied by [7, 24]. [24] approximated the third-order moment directly
from samples with Gaussian distribution, while [7] learned the third-order moment from a low-rank
linear regression problem. Tensor methods can obtain the model parameters to any precision ε but
require 1/ε² time/samples. Also, tensor methods can handle multiple components but suffer from
high sample complexity and high computational complexity. For example, the sample complexity
required by [7] and [24] is O(d⁶) and O(d³) respectively. On the other hand, the computational
burden mainly comes from the operations on tensors, which cost at least O(d³) for even a very simple
tensor evaluation. [7] also suffers from the slow nuclear norm minimization when estimating the
second and third order moments. In contrast, we use the tensor method only for initialization, i.e., we
only require ε to be a certain constant. Moreover, with a simple trick, we can ensure that the sample and
time complexity of our initialization step is only linear in d and N.
Convex Formulation. Another approach to guarantee the recovery of the parameters is to relax
the non-convex problem to convex problem. [9] proposed a convex formulation of MLR with two
components. The authors provide upper bounds on the recovery errors in the noisy case and show their
algorithm is information-theoretically optimal. However, the convex formulation needs to solve a
nuclear norm function under linear constraints, which leads to high computational cost. The extension
from two components to multiple components for this formulation is also not straightforward.
Subspace Clustering:
Subspace clustering [13, 17, 22, 23] is an important data clustering problem arising in many research
areas. The most popular subspace clustering algorithms, such as [13, 17, 23], are based on a two-stage
algorithm ? first finding a neighborhood for each data point and then clustering the points given the
neighborhood. The first stage usually takes at least O(dN²) time, which is prohibitive when N is
large. On the other hand, several methods such as K-subspaces clustering [18], K-SVD [1] and online
subspace clustering [25] do have linear time complexity O(rdN) per iteration; however, there are no
global or local convergence guarantees. In contrast, we show a locally superlinear convergence result
for an algorithm with computational complexity O(rdN). Our empirical results indicate that random
initialization is also sufficient to get to the global optima; we leave further investigation of such an
algorithm for future work.
3 Mixed Linear Regression with Multiple Components

In this paper, we assume the dataset {(x_i, y_i) ∈ R^{d+1}}_{i=1,2,...,N} is generated by
$$z_i \sim \mathrm{multinomial}(p), \quad x_i \sim N(0, I_d), \quad y_i = x_i^T w_{z_i}^*, \qquad (3)$$
where p is the vector of proportions of the different components, satisfying p^T 1 = 1, and {w_k* ∈ R^d}_{k=1,2,...,K} are
the ground truth parameters. The goal is to recover {w_k*}_{k=1,2,...,K} from the dataset. Our analysis is
based on the noiseless case, but we also illustrate the empirical performance of our algorithm in the noisy
case, where y_i = x_i^T w_{z_i}* + e_i for some noise e_i (see Figure 1).

Notation. We use [N] to denote the set {1, 2, ..., N} and S_k ⊂ [N] to denote the index set of the
samples that come from the k-th component. Define p_min := min_{k∈[K]} {p_k} and p_max := max_{k∈[K]} {p_k}.
Define Δw_j := w_j − w_j* and Δw*_{kj} := w_k* − w_j*. Define Δ_min := min_{j≠k} {‖Δw*_{jk}‖} and
Δ_max := max_{j≠k} {‖Δw*_{jk}‖}. We assume Δ_max/Δ_min is independent of the dimension d. Define w :=
[w_1; w_2; ...; w_K] ∈ R^{Kd}. We denote by w^(t) the parameters at the t-th iteration and by w^(0) the
initial parameters. For simplicity, we assume there are p_k N samples from the k-th model in any
random subset of N samples. We use E⟦X⟧ to denote the expectation of a random variable X. Let
T ∈ R^{d×d×d} be a tensor and T_{ijk} be the (i, j, k)-th entry of T. We say a tensor is supersymmetric
if T_{ijk} is invariant under any permutation of i, j, k. We also use the same notation T to denote
the multi-array map from three matrices, A, B, C ∈ R^{d×r}, to a new tensor: [T(A, B, C)]_{i,j,k} =
Σ_{p,q,l} T_{pql} A_{pi} B_{qj} C_{lk}. We say a tensor T is rank-one if T = a ⊗ b ⊗ c, where T_{ijk} = a_i b_j c_k.
We use ‖A‖ to denote the spectral norm of the matrix A and σ_i(A) to denote the i-th largest singular
value of A. For tensors, we use ‖T‖_op to denote the operator norm for a supersymmetric tensor T,
‖T‖_op := max_{‖a‖=1} |T(a, a, a)|. We use T_(1) ∈ R^{d×d²} to denote the matricization of T in the first
order, i.e., [T_(1)]_{i,(j−1)d+k} = T_{ijk}. Throughout the paper, we use Õ(d) to denote O(d · polylog(d)).
We assume K is a constant in general. However, if some quantity depends on K^K, we will explicitly
present it in the big-O notation. For simplicity, we just include higher-order terms of K and ignore
lower-order terms, e.g., O((2K)^{2K}) may be replaced by O(K^K).
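For concreteness, a short NumPy sketch that samples a synthetic dataset from model (3); the uniform mixing weights, dimensions, and seed are arbitrary illustrative choices of ours:

```python
import numpy as np

rng = np.random.default_rng(0)
d, K, N = 10, 3, 5000
p = np.full(K, 1.0 / K)                  # mixing proportions, p^T 1 = 1
W_star = rng.standard_normal((K, d))     # ground-truth parameters w_k^*

z = rng.choice(K, size=N, p=p)           # z_i ~ multinomial(p)
X = rng.standard_normal((N, d))          # x_i ~ N(0, I_d)
y = np.einsum('nd,nd->n', X, W_star[z])  # y_i = x_i^T w_{z_i}^*
```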
3.1 Local Strong Convexity
In this section, we analyze the Hessian of objective (1).
Theorem 1 (Local Positive Definiteness). Let {x_i, y_i}_{i=1,2,...,N} be sampled from the MLR model
(3). Let {w_k}_{k=1,2,...,K} be independent of the samples and lie in the neighborhood of the optimal
solution, i.e.,
$$\|\Delta w_k\| := \|w_k - w_k^*\| \le c_m \Delta_{\min}, \quad \forall k \in [K], \qquad (4)$$
where c_m = O(p_min (3K)^{−K} (Δ_min/Δ_max)^{2K−2}), Δ_min = min_{j≠k} {‖w_j* − w_k*‖} and Δ_max =
max_{j≠k} {‖w_j* − w_k*‖}. Let P ≥ 1 be a constant. Then if N ≥ O((PK)^K d log^{K+2}(d)), w.p.
≥ 1 − O(Kd^{−P}), we have
$$\tfrac{1}{8}\, p_{\min} N \Delta_{\min}^{2K-2}\, I \;\preceq\; \nabla^2 f(w + \Delta w) \;\preceq\; 10 N (3K)^K \Delta_{\max}^{2K-2}\, I \qquad (5)$$
for any Δw := [Δw_1; Δw_2; ...; Δw_K] satisfying ‖Δw_k‖ ≤ c_f Δ_min, where c_f =
O(p_min (3K)^{−K} d^{−K+1} (Δ_min/Δ_max)^{2K−2}).
The above theorem shows that the Hessians in a small neighborhood around a fixed {w_k}_{k=1,2,...,K},
which is close enough to the optimum, are positive definite (PD). The conditions on
{w_k}_{k=1,...,K} and {Δw_k}_{k=1,...,K} are different. {w_k}_{k=1,...,K} are required to be independent
of the samples and in a ball of radius c_m Δ_min centered at the optimal solution. On the other hand,
{Δw_k}_{k=1,2,...,K} can be dependent on the samples but are required to be in a smaller ball of radius
c_f Δ_min. The conditions are natural: if Δ_min is very small, then distinguishing between w_k* and w_{k'}*
is not possible and hence the Hessians will not be PD w.r.t. both components.

To prove the theorem, we decompose the Hessian of Eq. (1) into multiple blocks, (∇²f)_{jl} = ∂²f/(∂w_j ∂w_l) ∈
R^{d×d}. When w_k → w_k* for all k ∈ [K], the diagonal blocks of the Hessian become strictly positive
definite. At the same time, the off-diagonal blocks become close to zero. The blocks are approximated
by the samples using the matrix Bernstein inequality. The detailed proof can be found in Appendix A.2.

Traditional analysis of optimization methods on strongly convex functions, such as gradient descent,
requires that the Hessians at all parameter values are PD. Theorem 1 implies that when w_k = w_k* for all
k = 1, 2, ..., K, a small basin of attraction around the optimum is strongly convex, as formally stated
in the following corollary.
in the following corollary.
Corollary 1 (Strong Convexity near the Optimum). Let {xi , yi }i=1,2,??? ,N be sampled from the MLR
model (3). Let {wk }k=1,2,??? ,K lie in the neighborhood of the optimal solution, i.e.,
kwk wk? k ? cf min , 8k 2 [K],
(6)
K
where cf = O(pmin (3K) d K+1 ( min / max )2K 2 ). Then, for any constant P
1, if N
O((P K)K d logK+2 (d)), w.p. 1 O(Kd P ), the objective function f (w1 , w2 , ? ? ? , wK ) in Eq. (1)
is strongly convex. In particular, w.p. 1 O(Kd P ), for all w satisfying Eq. (6),
1
2
2
pmin N 2K
r2 f (w) 10N (3K)K 2K
(7)
max I.
min I
8
4
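As a quick numerical check of this local PD property, the following sketch assembles the Hessian of Eq. (1) from its d × d blocks (our own elaboration of the block decomposition mentioned in the proof discussion above; only sensible for small d and K):

```python
import numpy as np

def mlr_hessian(W, X, y):
    """Dense Hessian of objective (1) in R^{Kd x Kd}, built block-wise:
    diagonal blocks 2 sum_i (prod_{j != k} r_ij^2) x_i x_i^T, and
    off-diagonal blocks 4 sum_i r_ik r_il (prod_{j != k,l} r_ij^2) x_i x_i^T.
    """
    K, d = W.shape
    R = y[:, None] - X @ W.T
    R2 = R ** 2
    H = np.zeros((K * d, K * d))
    for k in range(K):
        for l in range(K):
            mask = np.ones(K, dtype=bool)
            mask[[k, l]] = False                 # drops one index when k == l
            P = np.prod(R2[:, mask], axis=1)
            coef = 2.0 * P if k == l else 4.0 * R[:, k] * R[:, l] * P
            H[k*d:(k+1)*d, l*d:(l+1)*d] = X.T @ (coef[:, None] * X)
    return H

# Near the optimum, np.linalg.eigvalsh(mlr_hessian(W_star, X, y)).min()
# should come out positive, consistent with Corollary 1.
```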
The strong convexity of Corollary 1 only holds in a basin of attraction near the optimum that has
diameter on the order of O(d^{−K+1}), which is too small to be reached by our initialization method
(in Sec. 3.2) using Õ(d) samples. Next, we show by a simple construction that the linear convergence of
gradient descent (GD) with resampling is still guaranteed when the solution is initialized in a much
larger neighborhood.
Theorem 2 (Convergence of Gradient Descent). Let {x_i, y_i}_{i=1,2,...,N} be sampled from the MLR
model (3). Let {w_k}_{k=1,2,...,K} be independent of the samples and lie in the neighborhood of
the optimal solution defined in Eq. (4). One iteration of gradient descent can be described as
w⁺ = w − η∇f(w), where η = 1/(10N(3K)^K Δ_max^{2K−2}). Then, if N ≥ O(K^K d log^{K+2}(d)), w.p.
≥ 1 − O(Kd^{−2}),
$$\|w^+ - w^*\|^2 \le \Big(1 - \frac{p_{\min}\, \Delta_{\min}^{2K-2}}{80\, (3K)^K\, \Delta_{\max}^{2K-2}}\Big) \|w - w^*\|^2. \qquad (8)$$
Remark. The linear convergence Eq. (8) requires the resampling of the data points for each iteration.
In Sec. 3.3, we combine Corollary 1, which doesn't require resampling when the iterate is sufficiently
close to the optimum, to show that there exists an algorithm using a finite number of samples to
achieve any solution precision.
To prove Theorem 2, we establish the PD property on the line between the current iterate and the optimum
by constructing a set of anchor points, and then apply the traditional analysis for the linear convergence
of gradient descent. The detailed proof can be found in Appendix A.3.
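A self-contained NumPy sketch of the resampled gradient-descent step analyzed in Theorem 2; the fresh-batch callable and the fixed step size stand in for the partition {Ω^(t)} and the theorem's η, and are illustrative assumptions of ours:

```python
import numpy as np

def gd_mlr_resampled(W0, sample_batch, eta=1e-4, T=50):
    """w^+ = w - eta * grad f(w), with a fresh batch drawn per iteration.

    sample_batch: callable returning a new (X, y) pair on each call; a
    hypothetical helper standing in for the partition {Omega^(t)}.
    """
    W = W0.copy()
    K = W.shape[0]
    for _ in range(T):
        X, y = sample_batch()
        R = y[:, None] - X @ W.T
        R2 = R ** 2
        G = np.empty_like(W)
        for k in range(K):
            mask = np.arange(K) != k
            Pk = np.prod(R2[:, mask], axis=1)
            G[k] = -2.0 * (R[:, k] * Pk) @ X
        W -= eta * G
    return W
```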
3.2 Initialization via Tensor Method
In this section, we propose a tensor method to initialize the parameters. We define the second-order
moment M₂ := E⟦y²(x ⊗ x − I)⟧ and the third-order moment
$$M_3 := E⟦y^3\, x \otimes x \otimes x⟧ - \sum_{j \in [d]} E⟦y^3 (e_j \otimes x \otimes e_j + e_j \otimes e_j \otimes x + x \otimes e_j \otimes e_j)⟧.$$
According to Lemma 6 in [24], M₂ = Σ_{k∈[K]} 2p_k w_k* ⊗ w_k* and M₃ = Σ_{k∈[K]} 6p_k w_k* ⊗ w_k* ⊗ w_k*.
Therefore, by calculating the eigendecomposition of the estimated moments, we are able to recover the
parameters to any precision provided enough samples. Theorem 8 of [24] needs O(d³) sample complexity
to obtain the model parameters with a certain precision. Such high sample complexity comes from the
tensor concentration bound. However, we find that the tensor eigendecomposition problem in MLR can be
reduced to the R^{K×K×K} space, so that the sample complexity and computational complexity of this
step are O(poly(K)). Our method is similar to the whitening process in [7, 19]. However, [7] needs O(d⁶)
sample complexity due to the nuclear-norm minimization problem, while ours requires only Õ(d). For this
sample complexity, we need to assume the following:

Assumption 1. The following quantities, σ_K(M₂), ‖M₂‖, ‖M₃‖_op^{2/3}, Σ_{k∈[K]} p_k ‖w_k*‖² and
(Σ_{k∈[K]} p_k ‖w_k*‖³)^{2/3}, have the same order in d, i.e., the ratios between any two of them are
independent of d.
The above assumption holds when {w_k*}_{k=1,2,...,K} are orthonormal to each other.
We formally present the tensor method in Algorithm 1 and its theoretical guarantee in Theorem 3.

Theorem 3. Under Assumption 1, if |Ω| ≥ O(d log²(d) + log⁴(d)), then w.p. ≥ 1 − O(d^{−2}), Algorithm 1
will output {w_k^(0)}_{k=1}^K that satisfies
$$\|w_k^{(0)} - w_k^*\| \le c_m \Delta_{\min}, \quad \forall k \in [K],$$
which falls in the locally PD region, Eq. (4), of Theorem 1.
The proof can be found in Appendix B.2. Forming M̂₂ explicitly would cost O(Nd²) time, which
is expensive when d is large. We can compute each step of the power method without explicitly
forming M̂₂. In particular, we alternately compute Ỹ^(t+1) = Σ_{i∈Ω_{M₂}} y_i² (x_i (x_i^T Y^(t)) − Y^(t))
and let Y^(t+1) = QR(Ỹ^(t+1)). Now each power-method iteration only needs O(KNd) time.
Furthermore, the number of iterations needed is a constant, since the power method has a linear
convergence rate and we don't need a very accurate solution; for the proof of this claim, we refer
to the proof of Lemma 10 in Appendix B. Next, we compute R̂₂ using O(KNd) time and compute Ŵ
in O(K³) time. Computing R̂₃ takes O(KNd + K³N) time. The robust tensor power method
takes O(poly(K) polylog(d)) time. In summary, the computational complexity of the initialization
is O(KdN + K³N + poly(K) polylog(d)) = Õ(dN).

Algorithm 1 Initialization for MLR via Tensor Method
Input: {x_i, y_i}_{i∈Ω}
Output: {w_k^(0)}_{k=1}^K
1: Partition the dataset Ω into Ω = Ω_{M₂} ∪ Ω₂ ∪ Ω₃ with |Ω_{M₂}| = O(d log²(d)), |Ω₂| = O(d log²(d))
   and |Ω₃| = O(log⁴(d)).
2: Compute the approximate top-K eigenvectors, Y ∈ R^{d×K}, of the second-order moment M̂₂ :=
   (1/|Ω_{M₂}|) Σ_{i∈Ω_{M₂}} y_i² (x_i ⊗ x_i − I), by the power method.
3: Compute R̂₂ = (1/(2|Ω₂|)) Σ_{i∈Ω₂} y_i² (Y^T x_i ⊗ Y^T x_i − I).
4: Compute the whitening matrix Ŵ ∈ R^{K×K} of R̂₂, i.e., Ŵ = Û₂ Σ̂₂^{−1/2} Û₂^T, where R̂₂ =
   Û₂ Σ̂₂ Û₂^T is the eigendecomposition of R̂₂.
5: Compute R̂₃ = (1/(6|Ω₃|)) Σ_{i∈Ω₃} y_i³ (r_i ⊗ r_i ⊗ r_i − Σ_{j∈[K]} e_j ⊗ r_i ⊗ e_j − Σ_{j∈[K]} e_j ⊗ e_j ⊗ r_i −
   Σ_{j∈[K]} r_i ⊗ e_j ⊗ e_j), where r_i = Y^T x_i for all i ∈ Ω₃.
6: Compute the eigenvalues {â_k}_{k=1}^K and the eigenvectors {v̂_k}_{k=1}^K of the whitened tensor
   R̂₃(Ŵ, Ŵ, Ŵ) ∈ R^{K×K×K} by using the robust tensor power method [2].
7: Return the estimates of the models, w_k^(0) = Y (Ŵ^T)^† (â_k v̂_k).
3.3 Global Convergence Algorithm
We are now ready to present the complete algorithm, Algorithm 2, which has a global convergence
guarantee. We use f_Ω(w) to denote the objective function of Eq. (1) generated from a subset Ω of the
dataset, i.e., f_Ω(w) = Σ_{i∈Ω} Π_{k=1}^K (y_i − x_i^T w_k)².

Theorem 4 (Global Convergence Guarantee). Let {x_i, y_i}_{i=1,2,...,N} be sampled from the MLR
model (3) with N ≥ O(d(K log(d))^{2K+3}). Let the step size η be smaller than a positive constant.
Then given any precision ε > 0, after T = O(log(d/ε)) iterations, w.p. ≥ 1 − O(Kd^{−2} log(d)), the
output of Algorithm 2 satisfies
$$\|w^{(T)} - w^*\| \le \varepsilon\, \Delta_{\min}.$$
The detailed proof is in Appendix B.3. The computational complexity required by our algorithm
is near-optimal: (a) the tensor method (Algorithm 1) is carefully employed so that only O(dN)
computation is needed; (b) gradient descent with resampling is conducted for log(d) iterations to
push the iterate into the next phase; (c) gradient descent without resampling is finally executed to
achieve any precision with log(1/ε) iterations. Therefore, the total computational complexity is
O(dN log(d/ε)). As shown in the theorem, our algorithm can achieve any precision ε > 0 without
any sample complexity dependency on ε. This follows from Corollary 1, which shows local strong
convexity of objective (1) with a fixed set of samples. By contrast, tensor methods [7, 24] require
O(1/ε²) samples and the EM algorithm requires O(log(1/ε)) samples [4, 29].
4 Subspace Clustering (SC)

The mixed linear regression problem can be viewed as clustering N (d + 1)-dimensional data points,
z_i = [x_i; y_i], into one of the K subspaces {z : [w_k*; −1]^T z = 0} for k ∈ [K]. Assume we have
data points {z_i}_{i=1,2,...,N} sampled from the following model:
$$a_i \sim \mathrm{multinomial}(p), \quad s_i \sim N(0, I_r), \quad z_i = U_{a_i}^* s_i, \qquad (9)$$
where p is the vector of proportions of samples from the different subspaces, satisfying p^T 1 = 1, and
{U_k*}_{k=1,2,...,K} are the bases of the ground truth subspaces. We can solve Eq. (2) by alternately
minimizing over U_k while fixing the others, which is equivalent to finding the top-r eigenvectors
of Σ_{i=1}^N λ_{ik} z_i z_i^T, where λ_{ik} = Π_{j≠k} ⟨I_d − U_j U_j^T, z_i z_i^T⟩. When the dimension is high, it is very
expensive to compute the exact top-r eigenvectors. A more efficient way is to use one iteration of the
power method (aka subspace iteration), which takes only O(KdN) computational time per iteration.
We present our algorithm in Algorithm 3.

We show that Algorithm 3 converges to the ground truth when the initial subspaces are sufficiently close
to the underlying subspaces. Define D(Û, V̂) := (1/√2) ‖UU^T − VV^T‖_F for Û, V̂ ∈ R^{d×r}, where
U, V are orthogonal bases of Span(Û), Span(V̂) respectively. Define D_max := max_{j≠q} D(U_q*, U_j*) and
D_min := min_{j≠q} D(U_q*, U_j*).

Theorem 5. Let {z_i}_{i=1,2,...,N} be sampled from the subspace clustering model (9). If N ≥
O(r(K log(r))^{2K+2}) and the initial parameters {U_k^0}_{k∈[K]} satisfy
$$\max_k \{D(U_k^*, U_k^0)\} \le c_s D_{\min}, \qquad (10)$$
where c_s = O((p_min/p_max) (3K)^{−K} (D_min/D_max)^{2K−3}), then w.p. ≥ 1 − O(Kr^{−2}), the sequence
{U_1^t, U_2^t, ..., U_K^t}_{t=1,2,...} generated by Algorithm 3 converges to the ground truth superlinearly. In
particular, for δ_t := max_k {D(U_k*, U_k^t)},
$$\delta_{t+1} \le \frac{\delta_t^2}{2\, c_s D_{\min}} \le \frac{1}{2}\, \delta_t.$$
We refer to Appendix C.2 for the proof. Compared to other methods, our sample complexity depends
only linearly on the dimension of each subspace. We refer to Table 1 in [23] for a comparison
of the conditions of different methods. Note that if D_min/D_max is independent of r or d, then the
initialization radius c_s is a constant. However, initialization within the required distance to the optima
is still an open question; tensor methods do not apply in this case. Interestingly, our experiments
seem to suggest that our proposed method converges to the global optima (in the setting considered
in the above theorem).
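For reference, a small sketch of the subspace distance D(·, ·) used above; orthonormalizing via QR is one valid choice of bases for the two spans:

```python
import numpy as np

def subspace_dist(U_hat, V_hat):
    """D(U_hat, V_hat) = ||U U^T - V V^T||_F / sqrt(2), where U, V are
    orthonormal bases of the column spans of U_hat and V_hat."""
    U = np.linalg.qr(U_hat)[0]
    V = np.linalg.qr(V_hat)[0]
    return np.linalg.norm(U @ U.T - V @ V.T, 'fro') / np.sqrt(2)
```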
Algorithm 2 Gradient Descent for MLR
Input: {x_i, y_i}_{i=1,2,...,N}, step size η.
Output: w
1: Partition the dataset into {Ω^(t)}_{t=0,1,...,T₀+1}.
2: Initialize w^(0) by Algorithm 1 with Ω^(0).
3: for t = 1, 2, ..., T₀ do
4:   w^(t) = w^(t−1) − η ∇f_{Ω^(t)}(w^(t−1))
5: for t = T₀ + 1, T₀ + 2, ..., T₀ + T₁ do
6:   w^(t) = w^(t−1) − η ∇f_{Ω^(T₀+1)}(w^(t−1))

Algorithm 3 Power Method for SC
Input: data points {z_i}_{i=1,2,...,N}
Output: {U_k}_{k∈[K]}
1: Some initialization, {U_k^0}_{k∈[K]}.
2: Partition the data into {Ω^(t)}_{t=0,1,2,...,T}.
3: for t = 0, 1, 2, ..., T do
4:   λ_i = Π_{j=1}^K ⟨I_d − U_j^t (U_j^t)^T, z_i z_i^T⟩, i ∈ Ω^(t)
5:   for k = 1, 2, ..., K do
6:     λ_{ik} = λ_i / ⟨I_d − U_k^t (U_k^t)^T, z_i z_i^T⟩
7:     U_k^{t+1} ← QR(Σ_{i∈Ω^(t)} λ_{ik} z_i z_i^T U_k^t)
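A NumPy sketch of one pass of Algorithm 3 on a single batch. Computing ⟨I − UU^T, z z^T⟩ as ‖z‖² − ‖U^T z‖² keeps the cost at O(KrdN); forming λ_{ik} as the product over j ≠ k, instead of the division in line 6, is a small deviation of ours for numerical robustness:

```python
import numpy as np

def psc_step(Us, Z):
    """One power-method pass over all K subspaces.

    Us: list of (d, r) orthonormal bases; Z: (N, d) data points.
    """
    K = len(Us)
    # res[i, j] = <I - U_j U_j^T, z_i z_i^T> = ||z_i||^2 - ||U_j^T z_i||^2
    res = np.stack([(Z ** 2).sum(1) - ((Z @ U) ** 2).sum(1) for U in Us],
                   axis=1)
    new_Us = []
    for k, U in enumerate(Us):
        mask = np.arange(K) != k
        lam = np.prod(res[:, mask], axis=1)      # lambda_{ik}
        M_U = Z.T @ (lam[:, None] * (Z @ U))     # sum_i lam_ik z_i z_i^T U_k
        new_Us.append(np.linalg.qr(M_U)[0])
    return new_Us
```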
5 Numerical Experiments

5.1 Mixed Linear Regression

In this section, we use synthetic data to illustrate the properties of our algorithm that minimizes Eq. (1),
which we call LOSCO (LOcally Strongly Convex Objective). We generate data points and parameters
from the standard normal distribution. We set K = 3 and p_k = 1/3 for all k ∈ [K]. The error is defined
as ε^(t) = min_{π∈Perm([K])} {max_{k∈[K]} ‖w_{π(k)}^(t) − w_k*‖ / ‖w_k*‖}, where Perm([K]) is the set of all
permutation functions on the set [K]. The errors reported in the paper are averaged over 10 trials. In
our experiments, we find there is no difference between resampling and not resampling. Hence, for
simplicity, we use the original dataset for all the processes. We set both parameters of the robust tensor
power method (denoted as N and L in Algorithm 1 of [2]) to 100. The experiments are conducted
in Matlab. After the initialization, we use alternating minimization (i.e., block coordinate descent) to
exactly minimize the objective over w_k for k = 1, 2, ..., K cyclically.
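For reference, a sketch of this error metric; enumerating all K! permutations is cheap for K = 3, and the function name is our own:

```python
import numpy as np
from itertools import permutations

def mlr_error(W, W_star):
    """min over permutations pi of max_k ||w_{pi(k)} - w_k^*|| / ||w_k^*||."""
    K = W_star.shape[0]
    norms = np.linalg.norm(W_star, axis=1)
    return min(
        max(np.linalg.norm(W[pi[k]] - W_star[k]) / norms[k] for k in range(K))
        for pi in permutations(range(K))
    )
```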
Fig. 1(a) shows the recovery rate for different dimensions and different numbers of samples. We call the
result of a trial a successful recovery if ε^(t) < 10⁻⁶ for some t < 100. The recovery rate is the proportion
of 10 trials with successful recovery. As shown in the figure, the sample complexity for exact recovery
is nearly linear in d. Fig. 1(b) shows the behavior of our algorithm in the noisy case. The noise is
drawn i.i.d. from e_i ~ N(0, σ²), and d is fixed at 100. As we can see from the figure, the solution
error is almost proportional to the noise deviation. Comparing among different N's, the solution error
decreases when N increases, so the method appears consistent in the presence of unbiased noise. We also
illustrate the performance of our tensor initialization method in Fig. 2(a) in Appendix D.

[Figure 1: four panels: (a) sample complexity, N vs. d; (b) noisy case, log(err) vs. log(σ) for
N = 6000, 60000, 600000; (c) log(err) vs. time(s) for d = 100, N = 6k; (d) log(err) vs. time(s) for
d = 1k, N = 60k. Curves in (c), (d): LOSCO-ALT-tensor, EM-tensor, LOSCO-ALT-random, EM-random.]
Figure 1: (a), (b): Empirical performance of our method. (c), (d): Performance of our method vs. the
EM method. Our method with random initialization is significantly better than EM with random
initialization. Performance of the two methods is comparable when initialized with the tensor method.

We next compare with the EM algorithm [29], where we alternately assign labels to points and exactly
solve for each model parameter according to the labels. EM has been shown to be very sensitive to the
initialization [29]. The grid search initialization method proposed in [29] is not feasible here, because
it only handles two components of the same magnitude. Therefore, we use random initialization
and tensor initialization for EM. We compare our method with EM on convergence speed under
different dimensions and different initialization methods. We use exact alternating minimization
(LOSCO-ALT) to optimize our objective (1), which has similar computational complexity to EM.
Fig. 1(c)(d) shows our method is competitive with EM in computational time when it converges to
the optima. In case (d), EM with random initialization doesn't converge to the optima, while
our method still converges. In Appendix D, we show some more experimental results.
Table 1: Time (sec.) comparison for different subspace clustering methods

N/K    SSC      SSC-OMP  LRR     TSC     NSN+spectral  NSN+GSR  PSC
200    22.08    31.83    4.01    2.76    3.28          5.90     0.41
400    152.61   60.74    11.18   8.45    11.51         15.90    0.32
600    442.29   99.63    33.36   30.09   36.04         33.26    0.60
800    918.94   159.91   79.06   75.69   85.92         54.46    0.73
1000   1738.82  258.39   154.89  151.64  166.70        83.96    0.76

5.2 Subspace Clustering
In this section, we compare our subspace clustering method, which we call PSC (Power method
for Subspace Clustering), with the state-of-the-art methods SSC [13], SSC-OMP [12], LRR [22], TSC
[17], NSN+spectral [23] and NSN+GSR [23] on computational time. We fix K = 5, r = 30 and
d = 50. The ground truth U_k* is generated from Gaussian matrices. Each data point is a normalized
Gaussian vector in its own subspace. We set p_k = 1/K. The initial subspace estimate is generated
by orthonormalizing Gaussian matrices. The stopping criterion for our algorithm is that every point is
clustered correctly, i.e., the clustering error (CE) (defined in [23]) is zero. We use publicly available
code for all the other methods (see [23] for the links).

As shown in Table 1, our method is much faster than all the other methods, especially when N
is large. Almost all the CEs corresponding to the results in Table 1 are very small; they are listed in
Appendix D. We also illustrate the CEs of our method for different N, d and r with K = 5 fixed in
Fig. 6 of Appendix D, from which we see that, whatever the ambient dimension d is, the clusters are
exactly recovered when N is proportional to r.
Acknowledgement: This research was supported by NSF grants CCF-1320746, IIS-1546459 and
CCF-1564000.
References
[1] Amir Adler, Michael Elad, and Yacov Hel-Or. Linear-time subspace clustering via bipartite graph modeling.
IEEE Transactions on Neural Networks and Learning Systems, 26(10):2234–2246, 2015.
[2] Animashree Anandkumar, Rong Ge, Daniel Hsu, Sham M Kakade, and Matus Telgarsky. Tensor decompositions for learning latent variable models. JMLR, 15:2773–2832, 2014.
[3] Peter Arbenz, Daniel Kressner, and D-MATH ETH Zürich. Lecture notes on solving large scale eigenvalue
problems. http://people.inf.ethz.ch/arbenz/ewp/Lnotes/lsevp2010.pdf.
[4] Sivaraman Balakrishnan, Martin J Wainwright, and Bin Yu. Statistical guarantees for the EM algorithm:
From population to sample-based analysis. Annals of Statistics, 2015.
[5] Emmanuel J. Candès and Benjamin Recht. Exact matrix completion via convex optimization. Foundations
of Computational Mathematics, 9(6):717–772, December 2009.
[6] Alexandre X Carvalho and Martin A Tanner. Mixtures-of-experts of autoregressive time series: asymptotic
normality and model specification. Neural Networks, 16(1):39–56, 2005.
[7] Arun T. Chaganty and Percy Liang. Spectral experts for estimating mixtures of linear regressions. In
ICML, pages 1040–1048, 2013.
[8] Xiujuan Chai, Shiguang Shan, Xilin Chen, and Wen Gao. Locally linear regression for pose-invariant face
recognition. Image Processing, 16(7):1716–1725, 2007.
[9] Yudong Chen, Xinyang Yi, and Constantine Caramanis. A convex formulation for mixed regression with
two components: Minimax optimal rates. COLT, 2014.
[10] Chandler Davis and William Morton Kahan. The rotation of eigenvectors by a perturbation. III. SIAM
Journal on Numerical Analysis, 7(1):1–46, 1970.
[11] P. Deb and A. M. Holmes. Estimates of use and costs of behavioural health care: a comparison of standard
and finite mixture models. Health Economics, 9(6):475–489, 2000.
[12] Eva L Dyer, Aswin C Sankaranarayanan, and Richard G Baraniuk. Greedy feature selection for subspace
clustering. The Journal of Machine Learning Research, 14(1):2487–2517, 2013.
[13] Ehsan Elhamifar and René Vidal. Sparse subspace clustering. In CVPR, pages 2790–2797, 2009.
[14] Giancarlo Ferrari-Trecate and Marco Muselli. A new learning method for piecewise linear regression. In
Artificial Neural Networks–ICANN 2002, pages 444–449. Springer, 2002.
[15] Scott Gaffney and Padhraic Smyth. Trajectory clustering with mixtures of regression models. In KDD.
ACM, 1999.
[16] Jihun Hamm and Daniel D Lee. Grassmann discriminant analysis: a unifying view on subspace-based
learning. In ICML, pages 376–383. ACM, 2008.
[17] Reinhard Heckel and Helmut Bölcskei. Subspace clustering via thresholding and spectral clustering. In
Acoustics, Speech and Signal Processing, pages 3263–3267. IEEE, 2013.
[18] Jeffrey Ho, Ming-Hsuan Yang, Jongwoo Lim, Kuang-Chih Lee, and David Kriegman. Clustering
appearances of objects under varying illumination conditions. In CVPR, volume 1, pages I-11. IEEE,
2003.
[19] Daniel Hsu and Sham M Kakade. Learning mixtures of spherical gaussians: moment methods and spectral
decompositions. In ITCS, pages 11–20. ACM, 2013.
[20] Daniel Hsu, Sham M Kakade, and Tong Zhang. A tail inequality for quadratic forms of subgaussian
random vectors. Electronic Communications in Probability, 17(52):1–6, 2012.
[21] Abbas Khalili and Jiahua Chen. Variable selection in finite mixture of regression models. Journal of the
American Statistical Association, 2012.
[22] Guangcan Liu, Zhouchen Lin, and Yong Yu. Robust subspace segmentation by low-rank representation. In
ICML, pages 663–670, 2010.
[23] Dohyung Park, Constantine Caramanis, and Sujay Sanghavi. Greedy subspace clustering. In NIPS, pages
2753–2761, 2014.
[24] Hanie Sedghi and Anima Anandkumar. Provable tensor methods for learning mixtures of generalized
linear models. arXiv preprint arXiv:1412.3046, 2014.
[25] Jie Shen, Ping Li, and Huan Xu. Online low-rank subspace clustering by basis dictionary pursuit. In ICML,
2016.
[26] Joel A. Tropp. User-friendly tail bounds for sums of random matrices. Foundations of Computational
Mathematics, 12(4):389–434, 2012.
[27] Kert Viele and Barbara Tong. Modeling with mixtures of linear regressions. Statistics and Computing,
12(4):315–330, 2002.
[28] Elisabeth Vieth. Fitting piecewise linear regression functions to biological responses. Journal of Applied
Physiology, 67(1):390–396, 1989.
[29] Xinyang Yi, Constantine Caramanis, and Sujay Sanghavi. Alternating minimization for mixed linear
regression. In ICML, pages 613–621, 2014.
A Theoretically Grounded Application of Dropout in
Recurrent Neural Networks
Yarin Gal    Zoubin Ghahramani
University of Cambridge
{yg279,zg201}@cam.ac.uk
Abstract
Recurrent neural networks (RNNs) stand at the forefront of many recent developments in deep learning. Yet a major difficulty with these models is their tendency to
overfit, with dropout shown to fail when applied to recurrent layers. Recent results
at the intersection of Bayesian modelling and deep learning offer a Bayesian interpretation of common deep learning techniques such as dropout. This grounding of
dropout in approximate Bayesian inference suggests an extension of the theoretical
results, offering insights into the use of dropout with RNN models. We apply this
new variational inference based dropout technique in LSTM and GRU models,
assessing it on language modelling and sentiment analysis tasks. The new approach
outperforms existing techniques, and to the best of our knowledge improves on the
single model state-of-the-art in language modelling with the Penn Treebank (73.4
test perplexity). This extends our arsenal of variational tools in deep learning.
1 Introduction
Recurrent neural networks (RNNs) are sequence-based models of key importance for natural language
understanding, language generation, video processing, and many other tasks [1–3]. The model's input
is a sequence of symbols, where at each time step a simple neural network (RNN unit) is applied to a
single symbol, as well as to the network's output from the previous time step. RNNs are powerful
models, showing superb performance on many tasks, but overfit quickly. Lack of regularisation in
RNN models makes it difficult to handle small data, and to avoid overfitting researchers often use
early stopping, or small and under-specified models [4].
Dropout is a popular regularisation technique with deep networks [5, 6] where network units are
randomly masked during training (dropped). But the technique has never been applied successfully
to RNNs. Empirical results have led many to believe that noise added to recurrent layers (connections
between RNN units) will be amplified for long sequences, and drown the signal [4]. Consequently,
existing research has concluded that the technique should be used with the inputs and outputs of the
RNN alone [4, 7–10]. But this approach still leads to overfitting, as is shown in our experiments.
Recent results at the intersection of Bayesian research and deep learning offer interpretation of
common deep learning techniques through Bayesian eyes [11–16]. This Bayesian view of deep
learning allowed the introduction of new techniques into the field, such as methods to obtain principled
uncertainty estimates from deep learning networks [14, 17]. Gal and Ghahramani [14] for example
showed that dropout can be interpreted as a variational approximation to the posterior of a Bayesian
neural network (NN). Their variational approximating distribution is a mixture of two Gaussians
with small variances, with the mean of one Gaussian fixed at zero. This grounding of dropout in
approximate Bayesian inference suggests that an extension of the theoretical results might offer
insights into the use of the technique with RNN models.
Here we focus on common RNN models in the field (LSTM [18], GRU [19]) and interpret these
as probabilistic models, i.e. as RNNs with network weights treated as random variables, and with
suitably defined likelihood functions. We then perform approximate variational inference in these
probabilistic Bayesian models (which we will refer to as Variational RNNs). Approximating the
posterior distribution over the weights with a mixture of Gaussians (with one component fixed at
zero and small variances) will lead to a tractable optimisation objective. Optimising this objective is
identical to performing a new variant of dropout in the respective RNNs.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
[Figure 1 diagrams: two unrolled RNNs over inputs x_{t-1}, x_t, x_{t+1} and outputs y_{t-1}, y_t, y_{t+1}; (a) Naive dropout RNN, (b) Variational RNN.]
Figure 1: Depiction of the dropout technique following our Bayesian interpretation (right)
compared to the standard technique in the field (left). Each square represents an RNN unit, with
horizontal arrows representing time dependence (recurrent connections). Vertical arrows represent
the input and output to each RNN unit. Coloured connections represent dropped-out inputs, with
different colours corresponding to different dropout masks. Dashed lines correspond to standard
connections with no dropout. Current techniques (naive dropout, left) use different masks at different
time steps, with no dropout on the recurrent layers. The proposed technique (Variational RNN, right)
uses the same dropout mask at each time step, including the recurrent layers.
In the new dropout variant, we repeat the same dropout mask at each time step for both inputs, outputs,
and recurrent layers (drop the same network units at each time step). This is in contrast to the existing
ad hoc techniques where different dropout masks are sampled at each time step for the inputs and
outputs alone (no dropout is used with the recurrent connections since the use of different masks
with these connections leads to deteriorated performance). Our method and its relation to existing
techniques is depicted in figure 1. When used with discrete inputs (i.e. words) we place a distribution
over the word embeddings as well. Dropout in the word-based model corresponds then to randomly
dropping word types in the sentence, and might be interpreted as forcing the model not to rely on
single words for its task.
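As a minimal sketch of this contrast (our own illustration; array shapes and names are assumptions, not the paper's code), the loop below unrolls a simple RNN either with a fresh input mask per step and no recurrent dropout (naive), or with one mask per sequence reused at every step, including on the recurrent connection (variational):

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_forward(x, W, U, b, p=0.5, variational=True):
    """Unroll a simple tanh RNN over x of shape (T, D) with dropout prob p.
    W: (D, K) input weights, U: (K, K) recurrent weights, b: (K,) bias."""
    T, D = x.shape
    K = U.shape[0]
    h = np.zeros(K)
    if variational:
        # Variational RNN: one mask per sequence, reused at every time step,
        # applied to inputs AND to the recurrent connection.
        zx = rng.binomial(1, 1 - p, D) / (1 - p)
        zh = rng.binomial(1, 1 - p, K) / (1 - p)
    for t in range(T):
        if not variational:
            # Naive dropout: fresh input mask each step, recurrence untouched.
            zx = rng.binomial(1, 1 - p, D) / (1 - p)
            zh = np.ones(K)
        h = np.tanh((x[t] * zx) @ W + (h * zh) @ U + b)
    return h
```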
We next survey related literature and background material, and then formalise our approximate
inference for the Variational RNN, resulting in the dropout variant proposed above. Experimental
results are presented thereafter.
2 Related Research
In the past few years a considerable body of work has been collected demonstrating the negative
effects of a naive application of dropout in RNNs' recurrent connections. Pachitariu and Sahani [7],
working with language models, reason that noise added in the recurrent connections of an RNN leads
to model instabilities. Instead, they add noise to the decoding part of the model alone. Bayer et al. [8]
apply a deterministic approximation of dropout (fast dropout) in RNNs. They reason that with dropout
the RNN's dynamics change dramatically, and that dropout should be applied to the "non-dynamic"
parts of the model, i.e., connections feeding from the hidden layer to the output layer. Pham et al. [9]
assess dropout on handwriting recognition tasks. They conclude that dropout in recurrent layers
disrupts the RNN's ability to model sequences, and that dropout should be applied to feed-forward
connections and not to recurrent connections. The work by Zaremba, Sutskever, and Vinyals [4] was
developed in parallel to Pham et al. [9]. Zaremba et al. [4] assess the performance of dropout in RNNs
on a wide series of tasks. They show that applying dropout to the non-recurrent connections alone
results in improved performance, and provide (as yet unbeaten) state-of-the-art results in language
modelling on the Penn Treebank. They reason that without dropout only small models were used
in the past in order to avoid overfitting, whereas with the application of dropout larger models can
be used, leading to improved results. This work is considered a reference implementation by many
(and we compare to this as a baseline below). Bluche et al. [10] extend on the previous body of work
and perform exploratory analysis of the performance of dropout before, inside, and after the RNN's
unit. They provide mixed results, not showing significant improvement on existing techniques. More
recently, and done in parallel to this work, Moon et al. [20] suggested a new variant of dropout in
RNNs in the speech recognition community. They randomly drop elements in the LSTM's internal
cell $c_t$ and use the same mask at every time step. This is the closest to our proposed approach
(although fundamentally different to the approach we suggest, explained in §4.1), and we compare to
this variant below as well.
Existing approaches are based on an empirical experimentation with different flavours of dropout,
following a process of trial-and-error. These approaches have led many to believe that dropout
cannot be extended to a large number of parameters within the recurrent layers, leaving them with
no regularisation. In contrast to these conclusions, we show that it is possible to derive a variational
inference based variant of dropout which successfully regularises such parameters, by grounding our
approach in recent theoretical research.
3 Background
We review necessary background in Bayesian neural networks and approximate variational inference.
Building on these ideas, in the next section we propose approximate inference in the probabilistic
RNN which will lead to a new variant of dropout.
3.1 Bayesian Neural Networks
Given training inputs $X = \{x_1, \ldots, x_N\}$ and their corresponding outputs $Y = \{y_1, \ldots, y_N\}$, in Bayesian (parametric) regression we would like to infer parameters $\omega$ of a function $y = f^{\omega}(x)$ that are likely to have generated our outputs. What parameters are likely to have generated our data? Following the Bayesian approach we would put some prior distribution over the space of parameters, $p(\omega)$. This distribution represents our prior belief as to which parameters are likely to have generated our data. We further need to define a likelihood distribution $p(y \mid x, \omega)$. For classification tasks we may assume a softmax likelihood,

$$p\big(y = d \mid x, \omega\big) = \mathrm{Categorical}\Big( \exp(f_d^{\omega}(x)) \Big/ \textstyle\sum_{d'} \exp(f_{d'}^{\omega}(x)) \Big)$$

or a Gaussian likelihood for regression. Given a dataset $X, Y$, we then look for the posterior distribution over the space of parameters: $p(\omega \mid X, Y)$. This distribution captures how likely various function parameters are given our observed data. With it we can predict an output for a new input point $x^*$ by integrating

$$p(y^* \mid x^*, X, Y) = \int p(y^* \mid x^*, \omega)\, p(\omega \mid X, Y)\, \mathrm{d}\omega. \qquad (1)$$

One way to define a distribution over a parametric set of functions is to place a prior distribution over a neural network's weights, resulting in a Bayesian NN [21, 22]. Given weight matrices $W_i$ and bias vectors $b_i$ for layer $i$, we often place standard matrix Gaussian prior distributions over the weight matrices, $p(W_i) = N(0, I)$, and often assume a point estimate for the bias vectors for simplicity.
3.2 Approximate Variational Inference in Bayesian Neural Networks
We are interested in finding the distribution of weight matrices (parametrising our functions) that have generated our data. This is the posterior over the weights given our observables $X, Y$: $p(\omega \mid X, Y)$. This posterior is not tractable in general, and we may use variational inference to approximate it (as was done in [23-25, 12]). We need to define an approximating variational distribution $q(\omega)$, and then minimise the KL divergence between the approximating distribution and the full posterior:

$$\mathrm{KL}\big(q(\omega)\,\|\,p(\omega \mid X, Y)\big) \propto -\int q(\omega) \log p(Y \mid X, \omega)\, \mathrm{d}\omega + \mathrm{KL}\big(q(\omega)\,\|\,p(\omega)\big) = -\sum_{i=1}^{N} \int q(\omega) \log p\big(y_i \mid f^{\omega}(x_i)\big)\, \mathrm{d}\omega + \mathrm{KL}\big(q(\omega)\,\|\,p(\omega)\big). \qquad (2)$$

We next extend this approximate variational inference to probabilistic RNNs, and use a $q(\omega)$ distribution that will give rise to a new variant of dropout in RNNs.
4 Variational Inference in Recurrent Neural Networks
In this section we will concentrate on simple RNN models for brevity of notation. Derivations for LSTM and GRU follow similarly. Given an input sequence $x = [x_1, \ldots, x_T]$ of length $T$, a simple RNN is formed by a repeated application of a function $f_h$. This generates a hidden state $h_t$ for time step $t$:

$$h_t = f_h(x_t, h_{t-1}) = \sigma(x_t W_h + h_{t-1} U_h + b_h)$$

for some non-linearity $\sigma$. The model output can be defined, for example, as $f_y(h_T) = h_T W_y + b_y$. We view this RNN as a probabilistic model by regarding $\omega = \{W_h, U_h, b_h, W_y, b_y\}$ as random variables (following normal prior distributions). To make the dependence on $\omega$ clear, we write $f_y^{\omega}$ for $f_y$ and similarly for $f_h^{\omega}$. We define our probabilistic model's likelihood as above (section 3.1). The posterior over random variables $\omega$ is rather complex, and we use variational inference with approximating distribution $q(\omega)$ to approximate it.
Evaluating each sum term in eq. (2) above with our RNN model we get

$$\int q(\omega) \log p\big(y \mid f_y^{\omega}(h_T)\big)\, \mathrm{d}\omega = \int q(\omega) \log p\Big(y \,\Big|\, f_y^{\omega}\big(f_h^{\omega}(x_T, h_{T-1})\big)\Big)\, \mathrm{d}\omega = \int q(\omega) \log p\Big(y \,\Big|\, f_y^{\omega}\big(f_h^{\omega}(x_T, f_h^{\omega}(\ldots f_h^{\omega}(x_1, h_0)\ldots))\big)\Big)\, \mathrm{d}\omega$$

with $h_0 = 0$. We approximate this with Monte Carlo (MC) integration with a single sample:

$$\approx \log p\Big(y \,\Big|\, f_y^{\widehat{\omega}}\big(f_h^{\widehat{\omega}}(x_T, f_h^{\widehat{\omega}}(\ldots f_h^{\widehat{\omega}}(x_1, h_0)\ldots))\big)\Big), \qquad \widehat{\omega} \sim q(\omega),$$

resulting in an unbiased estimator to each sum term.
This estimator is plugged into equation (2) to obtain our minimisation objective

$$\mathcal{L} \approx -\sum_{i=1}^{N} \log p\Big(y_i \,\Big|\, f_y^{\widehat{\omega}_i}\big(f_h^{\widehat{\omega}_i}(x_{i,T}, f_h^{\widehat{\omega}_i}(\ldots f_h^{\widehat{\omega}_i}(x_{i,1}, h_0)\ldots))\big)\Big) + \mathrm{KL}\big(q(\omega)\,\|\,p(\omega)\big). \qquad (3)$$
Note that for each sequence $x_i$ we sample a new realisation $\widehat{\omega}_i = \{\widehat{W}_h^i, \widehat{U}_h^i, \widehat{b}_h^i, \widehat{W}_y^i, \widehat{b}_y^i\}$, and that each symbol in the sequence $x_i = [x_{i,1}, \ldots, x_{i,T}]$ is passed through the function $f_h^{\widehat{\omega}_i}$ with the same weight realisations $\widehat{W}_h^i, \widehat{U}_h^i, \widehat{b}_h^i$ used at every time step $t \le T$.
Following [17] we define our approximating distribution to factorise over the weight matrices and their rows in $\omega$. For every weight matrix row $w_k$ the approximating distribution is:

$$q(w_k) = p\, N(w_k; 0, \sigma^2 I) + (1 - p)\, N(w_k; m_k, \sigma^2 I)$$

with $m_k$ a variational parameter (row vector), $p$ given in advance (the dropout probability), and small $\sigma^2$. We optimise over $m_k$, the variational parameters of the random weight matrices; these correspond to the RNN's weight matrices in the standard view¹. The KL in eq. (3) can be approximated as $L_2$ regularisation over the variational parameters $m_k$ [17].
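In practice, the single-sample estimate of eq. (3) with this KL approximation is just the ordinary dropout training loss plus weight decay; a schematic sketch (names ours, assuming the negative log-likelihood of one dropout forward pass is already computed, and weights are NumPy arrays):

```python
def variational_rnn_loss(nll_dropout_pass, weights, weight_decay):
    """Single-sample MC estimate of eq. (3): the negative log-likelihood of
    one dropout forward pass, plus the KL term approximated as L2
    regularisation over the variational parameters m_k (the weight matrices)."""
    l2 = sum((W ** 2).sum() for W in weights)
    return nll_dropout_pass + weight_decay * l2
```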
Evaluating the model output $f_y^{\widehat{\omega}}(\cdot)$ with sample $\widehat{\omega} \sim q(\omega)$ corresponds to randomly zeroing (masking) rows in each weight matrix $W$ during the forward pass, i.e., performing dropout. Our objective $\mathcal{L}$ is identical to that of the standard RNN. In our RNN setting with a sequence input, each weight matrix row is randomly masked once, and importantly the same mask is used through all time steps.²
Predictions can be approximated by either propagating the mean of each layer to the next (referred to as the standard dropout approximation), or by approximating the posterior in eq. (1) with $q(\omega)$,

$$p(y^* \mid x^*, X, Y) \approx \int p(y^* \mid x^*, \omega)\, q(\omega)\, \mathrm{d}\omega \approx \frac{1}{K} \sum_{k=1}^{K} p(y^* \mid x^*, \widehat{\omega}_k) \qquad (4)$$

with $\widehat{\omega}_k \sim q(\omega)$, i.e., by performing dropout at test time and averaging results (MC dropout).
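A sketch of the MC dropout estimate in eq. (4), assuming a `predict` callable that re-samples the per-sequence dropout masks on each call (the name is our placeholder):

```python
import numpy as np

def mc_dropout(predict, x_star, K=1000):
    """Approximate eq. (4): keep dropout on at test time, run K stochastic
    forward passes (each samples fresh masks), and average the outputs."""
    return np.mean([predict(x_star) for _ in range(K)], axis=0)
```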
4.1 Implementation and Relation to Dropout in RNNs
Implementing our approximate inference is identical to implementing dropout in RNNs with the
same network units dropped at each time step, randomly dropping inputs, outputs, and recurrent
connections. This is in contrast to existing techniques, where different network units would be
dropped at different time steps, and no dropout would be applied to the recurrent connections (fig. 1).
Certain RNN models such as LSTMs and GRUs use different gates within the RNN units. For example, an LSTM is defined using four gates: "input", "forget", "output", and "input modulation",

$$\mathrm{i} = \mathrm{sigm}(h_{t-1} U_i + x_t W_i) \qquad \mathrm{f} = \mathrm{sigm}(h_{t-1} U_f + x_t W_f)$$
$$\mathrm{o} = \mathrm{sigm}(h_{t-1} U_o + x_t W_o) \qquad \mathrm{g} = \tanh(h_{t-1} U_g + x_t W_g)$$
$$c_t = \mathrm{f} \circ c_{t-1} + \mathrm{i} \circ \mathrm{g}, \qquad h_t = \mathrm{o} \circ \tanh(c_t) \qquad (5)$$

with $\omega = \{W_i, U_i, W_f, U_f, W_o, U_o, W_g, U_g\}$ weight matrices and $\circ$ the element-wise product. Here an internal state $c_t$ (also referred to as the cell) is updated additively.

¹ Graves et al. [26] further factorise the approximating distribution over the elements of each row, and use a Gaussian approximating distribution with each element (rather than a mixture); the approximating distribution above seems to give better performance, and has a close relation with dropout [17].
² In appendix A we discuss the relation of our dropout interpretation to the ensembling one.
Alternatively, the model could be re-parametrised as in [26]:

$$\begin{pmatrix} \mathrm{i} \\ \mathrm{f} \\ \mathrm{o} \\ \mathrm{g} \end{pmatrix} = \begin{pmatrix} \mathrm{sigm} \\ \mathrm{sigm} \\ \mathrm{sigm} \\ \tanh \end{pmatrix} \left( \begin{pmatrix} x_t \\ h_{t-1} \end{pmatrix} \cdot W \right) \qquad (6)$$

with $\omega = \{W\}$, $W$ a matrix of dimensions $2K$ by $4K$ ($K$ being the dimensionality of $x_t$). We name this parametrisation a tied-weights LSTM (compared to the untied-weights LSTM in eq. (5)).
Even though these two parametrisations result in the same deterministic model, they lead to different approximating distributions $q(\omega)$. With the first parametrisation one could use different dropout masks for different gates (even when the same input $x_t$ is used). This is because the approximating distribution is placed over the matrices rather than the inputs: we might drop certain rows in one weight matrix $W$ applied to $x_t$ and different rows in another matrix $W'$ applied to $x_t$. With the second parametrisation we would place a distribution over the single matrix $W$. This leads to a faster forward pass, but with slightly diminished results, as we will see in the experiments section.
In more concrete terms, we may write our dropout variant with the second parametrisation (eq. (6)) as

$$\begin{pmatrix} \mathrm{i} \\ \mathrm{f} \\ \mathrm{o} \\ \mathrm{g} \end{pmatrix} = \begin{pmatrix} \mathrm{sigm} \\ \mathrm{sigm} \\ \mathrm{sigm} \\ \tanh \end{pmatrix} \left( \begin{pmatrix} x_t \circ z_x \\ h_{t-1} \circ z_h \end{pmatrix} \cdot W \right) \qquad (7)$$

with $z_x, z_h$ random masks repeated at all time steps (and similarly for the parametrisation in eq. (5)).
In comparison, Zaremba et al. [4]'s variant replaces $z_x$ in eq. (7) with a time-dependent mask, $x_t \circ z_x^t$, where $z_x^t$ is sampled anew every time step (whereas $z_h$ is removed and the recurrent connection $h_{t-1}$ is not dropped). On the other hand, Moon et al. [20]'s variant changes eq. (5) by adapting the internal cell, $c_t = c_t \circ z_c$, with the same mask $z_c$ used at all time steps. Note that unlike [20], by viewing dropout as an operation over the weights our technique trivially extends to RNNs and GRUs.
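A sketch of one tied-weights LSTM step under eq. (7); $z_x$ and $z_h$ are sampled once per sequence and passed unchanged to every step (the helper name and shapes are our assumptions):

```python
import numpy as np

def sigm(a):
    return 1.0 / (1.0 + np.exp(-a))

def lstm_step_tied(x_t, h_prev, c_prev, W, zx, zh):
    """One tied-weights LSTM step, eq. (7). W has shape (D + K, 4K);
    zx (D,) and zh (K,) are fixed masks reused for the whole sequence."""
    K = h_prev.shape[0]
    gates = np.concatenate([x_t * zx, h_prev * zh]) @ W
    i, f, o = sigm(gates[:K]), sigm(gates[K:2*K]), sigm(gates[2*K:3*K])
    g = np.tanh(gates[3*K:])
    c_t = f * c_prev + i * g          # additive cell update, eq. (5)
    h_t = o * np.tanh(c_t)
    return h_t, c_t
```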
4.2 Word Embeddings Dropout
In datasets with continuous inputs we often apply dropout to the input layer, i.e., to the input vector itself. This is equivalent to placing a distribution over the weight matrix which follows the input and approximately integrating over it (the matrix is optimised, therefore prone to overfitting otherwise). But for models with discrete inputs such as words (where every word is mapped to a continuous vector, a word embedding) this is seldom done. With word embeddings the input can be seen as either the word embedding itself, or, more conveniently, as a "one-hot" encoding (a vector of zeros with 1 at a single position). The product of the one-hot encoded vector with an embedding matrix $W_E \in \mathbb{R}^{V \times D}$ (where $D$ is the embedding dimensionality and $V$ is the number of words in the vocabulary) then gives a word embedding. Curiously, this parameter layer is the largest layer in most language applications, yet it is often not regularised. Since the embedding matrix is optimised it can lead to overfitting, and it is therefore desirable to apply dropout to the one-hot encoded vectors. This in effect is identical to dropping words at random throughout the input sentence, and can also be interpreted as encouraging the model not to "depend" on single words for its output.
Note that as before, we randomly set rows of the matrix $W_E \in \mathbb{R}^{V \times D}$ to zero. Since we repeat the same mask at each time step, we drop the same words throughout the sequence, i.e., we drop word types at random rather than word tokens (as an example, the sentence "the dog and the cat" might become "_ dog and _ cat" or "the _ and the cat", but never "_ dog and the cat"). A possible inefficiency in implementing this is the requirement to sample $V$ Bernoulli random variables, where $V$ might be large. This can be solved by the observation that for sequences of length $T$, at most $T$ embeddings could be dropped (other dropped embeddings have no effect on the model output). For $T \ll V$ it is therefore more efficient to first map the words to the word embeddings, and only then to zero-out word embeddings based on their word type.
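A sketch of this efficient route for $T \ll V$ (naming and the inverted-dropout scaling are ours): look the embeddings up first, then zero them per word type with a single mask shared across the whole sequence:

```python
import numpy as np

rng = np.random.default_rng(0)

def word_type_dropout(tokens, W_E, p=0.5):
    """tokens: length-T int array of word ids; W_E: (V, D) embedding matrix.
    Drops word *types*: every occurrence of a dropped word is zeroed, so at
    most T (not V) Bernoulli variables are sampled."""
    emb = W_E[tokens]                                    # (T, D) lookup first
    keep = {w: rng.binomial(1, 1 - p) for w in np.unique(tokens)}
    mask = np.array([keep[w] for w in tokens], dtype=float)
    return emb * mask[:, None] / (1 - p)                 # inverted-dropout scaling
```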
5 Experimental Evaluation
We start by implementing our proposed dropout variant into the Torch implementation of Zaremba
et al. [4], which has become a reference implementation for many in the field. Zaremba et al. [4] have
set a benchmark on the Penn Treebank that to the best of our knowledge hasn't been beaten for
the past 2 years. We improve on [4]'s results, and show that our dropout variant improves model
performance compared to early-stopping and compared to using under-specified models. We continue
to evaluate our proposed dropout variant with both LSTM and GRU models on a sentiment analysis
task where labelled data is scarce. We finish by giving an in-depth analysis of the properties of the
proposed method, with code and many experiments deferred to the appendix due to space constraints.
5.1 Language Modelling
We replicate the language modelling experiment of Zaremba, Sutskever, and Vinyals [4]. The
experiment uses the Penn Treebank, a standard benchmark in the field. This dataset is considered
a small one in the language processing community, with 887,521 tokens (words) in total, making
overfitting a considerable concern. Throughout the experiments we refer to LSTMs with the dropout
technique proposed following our Bayesian interpretation as Variational LSTMs, and refer to existing
dropout techniques as naive dropout LSTMs (different masks at different steps, applied to the input
and output of the LSTM alone). We refer to LSTMs with no dropout as standard LSTMs.
We implemented a Variational LSTM for both the medium model of [4] (2 layers with 650 units in
each layer) as well as their large model (2 layers with 1500 units in each layer). The only changes
we've made to [4]'s setting are 1) using our proposed dropout variant instead of naive dropout, and
2) tuning weight decay (which was chosen to be zero in [4]). All other hyper-parameters are kept
identical to [4]: learning rate decay was not tuned for our setting and is used following [4]. Dropout
parameters were optimised with grid search (tying the dropout probability over the embeddings
together with the one over the recurrent layers, and tying the dropout probability for the inputs and
outputs together as well). These are chosen to minimise validation perplexity³. We further compared
to Moon et al. [20] who only drop elements in the LSTM internal state using the same mask at all
time steps (in addition to performing dropout on the inputs and outputs). We implemented their
dropout variant with each model size, and repeated the procedure above to find optimal dropout
probabilities (0.3 with the medium model, and 0.5 with the large model). We had to use early stopping
for the large model with [20]'s variant as the model starts overfitting after 16 epochs. Moon et al.
[20] proposed their dropout variant within the speech recognition community, where they did not
have to consider embeddings overfitting (which, as we will see below, affects the recurrent layers
considerably). We therefore performed an additional experiment using [20]'s variant together with
our embedding dropout (referred to as Moon et al. [20]+emb dropout).
Our results are given in table 1. For the variational LSTM we give results using both the tied weights
model (eq. (6)-(7), Variational (tied weights)), and without weight tying (eq. (5), Variational (untied
weights)). For each model we report performance using both the standard dropout approximation
(averaging the weights at test time, i.e., propagating the mean of each approximating distribution as input
to the next layer), and using MC dropout (obtained by performing dropout at test time 1000 times,
and averaging the model outputs following eq. (4), denoted MC). For each model we report average
perplexity and standard deviation (each experiment was repeated 3 times with different random seeds
and the results were averaged). Model training time is given in words per second (WPS).
It is interesting that using the dropout approximation, weight tying results in lower validation error
and test error than the untied weights model. But with MC dropout the untied weights model performs
much better. Validation perplexity for the large model is improved from [4]'s 82.2 down to 77.3 (with
weight tying), or 77.9 without weight tying. Test perplexity is reduced from 78.4 down to 73.4 (with
MC dropout and untied weights). To the best of our knowledge, these are currently the best single
model perplexities on the Penn Treebank.
It seems that Moon et al. [20] underperform even compared to [4]. With no embedding dropout the
large model overfits and early stopping is required (with no early stopping the model's validation
perplexity goes up to 131 within 30 epochs). Adding our embedding dropout, the model performs
much better, but still underperforms compared to applying dropout on the inputs and outputs alone.
Comparing our results to the non-regularised LSTM (evaluated with early stopping, giving similar
performance as the early stopping experiment in [4]) we see that for either model size an improvement
can be obtained by using our dropout variant. Comparing the medium sized Variational model to the
large one we see that a significant reduction in perplexity can be achieved by using a larger model.
This cannot be done with the non-regularised LSTM, where a larger model leads to worse results.
³ Optimal probabilities are 0.3 and 0.5 respectively for the large model, compared to [4]'s 0.6 dropout probability, and 0.2 and 0.35 respectively for the medium model, compared to [4]'s 0.5 dropout probability.
|                                  | Medium LSTM |            |      | Large LSTM |            |      |
|----------------------------------|-------------|------------|------|------------|------------|------|
|                                  | Validation  | Test       | WPS  | Validation | Test       | WPS  |
| Non-regularized (early stopping) | 121.1       | 121.7      | 5.5K | 128.3      | 127.4      | 2.5K |
| Moon et al. [20]                 | 100.7       | 97.0       | 4.8K | 122.9      | 118.7      | 3K   |
| Moon et al. [20] + emb dropout   | 88.9        | 86.5       | 4.8K | 88.8       | 86.0       | 3K   |
| Zaremba et al. [4]               | 86.2        | 82.7       | 5.5K | 82.2       | 78.4       | 2.5K |
| Variational (tied weights)       | 81.8 ± 0.2  | 79.7 ± 0.1 | 4.7K | 77.3 ± 0.2 | 75.0 ± 0.1 | 2.4K |
| Variational (tied weights, MC)   |             | 79.0 ± 0.1 |      |            | 74.1 ± 0.0 |      |
| Variational (untied weights)     | 81.9 ± 0.2  | 79.7 ± 0.1 | 2.7K | 77.9 ± 0.3 | 75.2 ± 0.2 | 1.6K |
| Variational (untied weights, MC) |             | 78.6 ± 0.1 |      |            | 73.4 ± 0.0 |      |
Table 1: Single model perplexity (on test and validation sets) for the Penn Treebank language
modelling task. Two model sizes are compared (a medium and a large LSTM, following [4]'s setup),
with number of processed words per second (WPS) reported. Both dropout approximation and MC
dropout are given for the test set with the Variational model. A common approach for regularisation is
to reduce model complexity (necessary with the non-regularised LSTM). With the Variational models
however, a significant reduction in perplexity is achieved by using larger models.
This shows that reducing the complexity of the model, a possible approach to avoid overfitting,
actually leads to a worse fit when using dropout.
We also see that the tied weights model achieves very close performance to that of the untied weights
one when using the dropout approximation. Assessing model run time though (on a Titan X GPU),
we see that tying the weights results in a more time-efficient implementation. This is because the
single matrix product is implemented as a single GPU kernel, instead of the four smaller matrix
products used in the untied weights model (where four GPU kernels are called sequentially). Note
though that a low level implementation should give similar run times.
We further experimented with a model averaging experiment following [4]'s setting, where several
large models are trained independently with their outputs averaged. We used Variational LSTMs
with MC dropout following the setup above. Using 10 Variational LSTMs we improve [4]'s test set
perplexity from 69.5 to 68.7, obtaining identical perplexity to [4]'s experiment with 38 models.
Lastly, we report validation perplexity with reduced learning rate decay (with the medium model).
Learning rate decay is often used for regularisation by setting the optimiser to make smaller steps
when the model starts overfitting (as done in [4]). By removing it we can assess the regularisation
effects of dropout alone. As can be seen in fig. 2, even with early stopping, Variational LSTM achieves
lower perplexity than naive dropout LSTM and standard LSTM. Note though that a significantly
lower perplexity for all models can be achieved with learning rate decay scheduling, as seen in table 1.
5.2 Sentiment Analysis
We next evaluate our dropout variant with both LSTM and GRU models on a sentiment analysis task,
where labelled data is scarce. We use MC dropout (which we compare to the dropout approximation
further in appendix B), and untied weights model parametrisations.
We use the raw Cornell film reviews corpus collected by Pang and Lee [27]. The dataset is composed
of 5000 film reviews. We extract consecutive segments of T words from each review for T = 200,
and use the corresponding film score as the observed output y. The model is built from one embedding
layer (of dimensionality 128), one LSTM layer (with 128 network units for each gate; GRU setting is
built similarly), and finally a fully connected layer applied to the last output of the LSTM (resulting
in a scalar output). We use the Adam optimiser [28] throughout the experiments, with batch size 128,
and MC dropout at test time with 10 samples.
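A Keras-style sketch of this architecture, for illustration only (the vocabulary size and dropout rates are placeholders; Keras's `dropout`/`recurrent_dropout` LSTM arguments apply per-sequence masks in the spirit of our variant, while the plain `Dropout` layer only approximates the word-type embedding dropout of Section 4.2):

```python
from tensorflow import keras
from tensorflow.keras import layers

V, D = 20000, 128   # V is a placeholder vocabulary size; D = 128 as in the text

model = keras.Sequential([
    layers.Embedding(V, D),
    layers.Dropout(0.5),     # stand-in for the word-type dropout of Sec. 4.2
    layers.LSTM(128, dropout=0.25, recurrent_dropout=0.25),  # per-sequence masks
    layers.Dense(1),         # scalar film-score output
])
model.compile(optimizer=keras.optimizers.Adam(), loss="mse")
```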
Figure 2: Medium model validation perplexity for the Penn Treebank language modelling task.
Learning rate decay was reduced to assess model overfitting using dropout alone. Even with early
stopping, Variational LSTM achieves lower perplexity than naive dropout LSTM and standard LSTM.
Lower perplexity for all models can be achieved with learning rate decay scheduling, seen in table 1.
Figure 3: Sentiment analysis error for Variational LSTM / GRU compared to naive dropout LSTM / GRU and standard LSTM / GRU (with no dropout). Panels: (a) LSTM train error, (b) LSTM test error, (c) GRU test error; each compares the variational, naive dropout, and standard models.
The main results can be seen in fig. 3. We compared Variational LSTM (with our dropout variant
applied with each weight layer) to standard techniques in the field. Training error is shown in fig. 3a
and test error is shown in fig. 3b. Optimal dropout probabilities and weight decay were used for each
model (see appendix B). It seems that the only model not to overfit is the Variational LSTM, which
achieves lowest test error as well. Variational GRU test error is shown in fig. 3c (with loss plot given
in appendix B). Optimal dropout probabilities and weight decay were used again for each model.
Variational GRU avoids overfitting to the data and converges to the lowest test error. Early stopping in
this dataset will result in smaller test error though (lowest test error is obtained by the non-regularised
GRU model at the second epoch). It is interesting to note that standard techniques exhibit peculiar
behaviour where test error repeatedly decreases and increases. This behaviour is not observed with
the Variational GRU. Convergence plots of the loss for each model are given in appendix B.
We next explore the effects of dropping-out different parts of the model. We assessed our Variational
LSTM with different combinations of dropout over the embeddings (pE = 0, 0.5) and recurrent
layers (pU = 0, 0.5) on the sentiment analysis task. The convergence plots can be seen in figure 4a. It
seems that without both strong embeddings regularisation and strong regularisation over the recurrent
layers the model would overfit rather quickly. The behaviour when pU = 0.5 and pE = 0 is quite
interesting: test error decreases and then increases before decreasing again. Also, it seems that when
pU = 0 and pE = 0.5 the model becomes very erratic.
Lastly, we tested the performance of Variational LSTM with different recurrent layer dropout
probabilities, fixing the embedding dropout probability at either pE = 0 or pE = 0.5 (figs. 4b-4c).
These results are rather intriguing. In this experiment all models have converged, with the loss getting
near zero (not shown). Yet it seems that with no embedding dropout, a higher dropout probability
within the recurrent layers leads to overfitting! This presumably happens because of the large number
of parameters in the embedding layer which is not regularised. Regularising the embedding layer with
dropout probability pE = 0.5 we see that a higher recurrent layer dropout probability indeed leads to
increased robustness to overfitting, as expected. These results suggest that embedding dropout can be
of crucial importance in some tasks.
In appendix B we assess the importance of weight decay with our dropout variant. Common practice
is to remove weight decay with naive dropout. Our results suggest that weight decay plays an
important role with our variant (it corresponds to our prior belief of the distribution over the weights).
6 Conclusions
We presented a new technique for recurrent neural network regularisation. Our RNN dropout variant
is theoretically motivated and its effectiveness was empirically demonstrated.
Figure 4: Test error for Variational LSTM with various settings on the sentiment analysis task. Different dropout probabilities are used with the recurrent layer (pU) and embedding layer (pE). Panels: (a) combinations of pE = 0, 0.5 with pU = 0, 0.5; (b) pU = 0, ..., 0.5 with fixed pE = 0; (c) pU = 0, ..., 0.5 with fixed pE = 0.5.
References
[1] Martin Sundermeyer, Ralf Schlüter, and Hermann Ney. LSTM neural networks for language modeling. In
INTERSPEECH, 2012.
[2] N Kalchbrenner and P Blunsom. Recurrent continuous translation models. In EMNLP, 2013.
[3] Ilya Sutskever, Oriol Vinyals, and Quoc VV Le. Sequence to sequence learning with neural networks. In
NIPS, 2014.
[4] Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. Recurrent neural network regularization. arXiv
preprint arXiv:1409.2329, 2014.
[5] Geoffrey E. Hinton et al. Improving neural networks by preventing co-adaptation of feature detectors.
arXiv preprint arXiv:1207.0580, 2012.
[6] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout:
A simple way to prevent neural networks from overfitting. JMLR, 2014.
[7] Marius Pachitariu and Maneesh Sahani. Regularization and nonlinearities for neural language models:
when are they needed? arXiv preprint arXiv:1301.5650, 2013.
[8] J Bayer et al. On fast dropout and its applicability to recurrent networks. arXiv preprint arXiv:1311.0701,
2013.
[9] Vu Pham, Theodore Bluche, Christopher Kermorvant, and Jerome Louradour. Dropout improves recurrent
neural networks for handwriting recognition. In ICFHR. IEEE, 2014.
[10] Théodore Bluche, Christopher Kermorvant, and Jérôme Louradour. Where to apply dropout in recurrent
neural networks for handwriting recognition? In ICDAR. IEEE, 2015.
[11] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In ICML, 2014.
[12] Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. Weight uncertainty in neural
network. In ICML, 2015.
[13] Jose Miguel Hernandez-Lobato and Ryan Adams. Probabilistic backpropagation for scalable learning of
Bayesian neural networks. In ICML, 2015.
[14] Yarin Gal and Zoubin Ghahramani. Bayesian convolutional neural networks with Bernoulli approximate
variational inference. arXiv:1506.02158, 2015.
[15] Diederik Kingma, Tim Salimans, and Max Welling. Variational dropout and the local reparameterization
trick. In NIPS. Curran Associates, Inc., 2015.
[16] Anoop Korattikara Balan, Vivek Rathod, Kevin P Murphy, and Max Welling. Bayesian dark knowledge.
In NIPS. Curran Associates, Inc., 2015.
[17] Yarin Gal and Zoubin Ghahramani. Dropout as a Bayesian approximation: Representing model uncertainty
in deep learning. arXiv:1506.02142, 2015.
[18] S Hochreiter and J Schmidhuber. Long short-term memory. Neural computation, 9(8), 1997.
[19] Kyunghyun Cho et al. Learning phrase representations using RNN encoder-decoder for statistical machine
translation. In EMNLP, Doha, Qatar, October 2014. ACL.
[20] Taesup Moon, Heeyoul Choi, Hoshik Lee, and Inchul Song. RnnDrop: A Novel Dropout for RNNs in
ASR. In ASRU Workshop, December 2015.
[21] David JC MacKay. A practical Bayesian framework for backpropagation networks. Neural computation, 4
(3):448-472, 1992.
[22] R M Neal. Bayesian learning for neural networks. PhD thesis, University of Toronto, 1995.
[23] Geoffrey E Hinton and Drew Van Camp. Keeping the neural networks simple by minimizing the description
length of the weights. In COLT, pages 5-13. ACM, 1993.
[24] David Barber and Christopher M Bishop. Ensemble learning in Bayesian neural networks. NATO ASI
SERIES F COMPUTER AND SYSTEMS SCIENCES, 168:215-238, 1998.
[25] Alex Graves. Practical variational inference for neural networks. In NIPS, 2011.
[26] Alan Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recurrent
neural networks. In ICASSP. IEEE, 2013.
[27] Bo Pang and Lillian Lee. Seeing stars: Exploiting class relationships for sentiment categorization with
respect to rating scales. In ACL. ACL, 2005.
[28] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint
arXiv:1412.6980, 2014.
[29] James Bergstra et al. Theano: a CPU and GPU math expression compiler. In Proceedings of the Python
for Scientific Computing Conference (SciPy), June 2010. Oral Presentation.
[30] fchollet. Keras. https://github.com/fchollet/keras, 2015.
5,794 | 6,242 | A primal-dual method for conic constrained
distributed optimization problems
Necdet Serhat Aybat
Department of Industrial Engineering
Penn State University
University Park, PA 16802
[email protected]
Erfan Yazdandoost Hamedani
Department of Industrial Engineering
Penn State University
University Park, PA 16802
[email protected]
Abstract
We consider cooperative multi-agent consensus optimization problems over an
undirected network of agents, where only those agents connected by an edge
can directly communicate. The objective is to minimize the sum of agent-specific composite convex functions over agent-specific private conic constraint
sets; hence, the optimal consensus decision should lie in the intersection of these
private sets. We provide convergence rates in sub-optimality, infeasibility and
consensus violation; examine the effect of underlying network topology on the
convergence rates of the proposed decentralized algorithms; and show how to extend these methods to handle time-varying communication networks.
1 Introduction
Let $\mathcal{G} = (\mathcal{N}, \mathcal{E})$ denote a connected undirected graph of $N$ computing nodes, where $\mathcal{N} \triangleq \{1, \ldots, N\}$ and $\mathcal{E} \subseteq \mathcal{N} \times \mathcal{N}$ denotes the set of edges; without loss of generality assume that $(i, j) \in \mathcal{E}$ implies $i < j$. Suppose nodes $i$ and $j$ can exchange information only if $(i, j) \in \mathcal{E}$, and each node $i \in \mathcal{N}$ has a private (local) cost function $\varphi_i : \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$ such that

$$\varphi_i(x) \triangleq \rho_i(x) + f_i(x), \qquad (1)$$

where $\rho_i : \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$ is a possibly non-smooth convex function, and $f_i : \mathbb{R}^n \to \mathbb{R}$ is a smooth convex function. We assume that $f_i$ is differentiable on an open set containing $\mathrm{dom}\,\rho_i$ with a Lipschitz continuous gradient $\nabla f_i$, whose Lipschitz constant is $L_i$; and the prox map of $\rho_i$,

$$\mathrm{prox}_{\rho_i}(x) \triangleq \operatorname*{argmin}_{y \in \mathbb{R}^n}\; \rho_i(y) + \tfrac{1}{2}\|y - x\|^2, \qquad (2)$$

is efficiently computable for $i \in \mathcal{N}$, where $\|\cdot\|$ denotes the Euclidean norm. Let $\mathcal{N}_i \triangleq \{j \in \mathcal{N} : (i, j) \in \mathcal{E} \text{ or } (j, i) \in \mathcal{E}\}$ denote the set of neighboring nodes of $i \in \mathcal{N}$, and $d_i \triangleq |\mathcal{N}_i|$ the degree of node $i \in \mathcal{N}$. Consider the following minimization problem:

$$\min_{x \in \mathbb{R}^n} \; \sum_{i \in \mathcal{N}} \varphi_i(x) \quad \text{s.t.} \quad A_i x - b_i \in \mathcal{K}_i, \quad \forall i \in \mathcal{N}, \qquad (3)$$
where $A_i \in \mathbb{R}^{m_i \times n}$, $b_i \in \mathbb{R}^{m_i}$, and $\mathcal{K}_i \subseteq \mathbb{R}^{m_i}$ is a closed, convex cone. Suppose that projections onto $\mathcal{K}_i$ can be computed efficiently, while the projection onto the preimage $A_i^{-1}(\mathcal{K}_i + b_i)$ is assumed to be impractical, e.g., when $\mathcal{K}_i$ is the positive semidefinite cone, projecting onto the preimage requires solving an SDP. Our objective is to solve (3) in a decentralized fashion using the computing nodes $\mathcal{N}$ and exchanging information only along the edges $\mathcal{E}$. In Section 2 and Section 3, we consider (3) when the topology of the connectivity graph is static and time-varying, respectively.
This computational setting, i.e., decentralized consensus optimization, appears as a generic model
for various applications in signal processing, e.g., [1, 2], machine learning, e.g., [3, 4, 5], and statistical inference, e.g., [6]. Clearly, (3) can also be solved in a "centralized" fashion by communicating all the private functions $\varphi_i$ to a central node, and solving the overall problem at this node. However, such an approach can be very expensive from both communication and computation perspectives when compared to distributed algorithms, which are far more scalable
to increasing problem data and network sizes. In particular, suppose $(A_i, b_i) \in \mathbb{R}^{m \times (n+1)}$ and $\varphi_i(x) = \lambda \|x\|_1 + \|A_i x - b_i\|^2$ for some given $\lambda > 0$ for $i \in \mathcal{N}$, with $m \ll n$ and $N \gg 1$.
Hence, (3) is a very large scale LASSO problem with distributed data. To solve (3) in a centralized
fashion, the data $\{(A_i, b_i) : i \in \mathcal{N}\}$ needs to be communicated to the central node. This can be
prohibitively expensive, and may also violate privacy constraints in case some node $i$ does not
want to reveal the details of its private data. Furthermore, it requires that the central node has large
enough memory to be able to accommodate all the data. On the other hand, at the expense of slower
convergence, one can completely do away with a central node and seek consensus among all
the nodes on an optimal decision using "local" decisions communicated by the neighboring nodes.
computationally efficient when compared to computing the entire gradient at a central node. With
these considerations in mind, we propose decentralized algorithms that can compute solutions to (3)
using only local computations without explicitly requiring the nodes to communicate the functions
{?i : i ? N }; thereby, circumventing all privacy, communication and memory issues. Examples
of constrained machine learning problems that fit into our framework include multiple kernel learning [7], and primal linear support vector machine (SVM) problems. In the numerical section we
implemented the proposed algorithms on the primal SVM problem.
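For instance, in the LASSO example above one has $\rho_i = \lambda \|\cdot\|_1$, for which the prox map (2) is coordinate-wise soft-thresholding; a minimal NumPy sketch (naming ours):

```python
import numpy as np

def prox_l1(x, lam):
    """prox_{lam * ||.||_1}(x) per (2): soft-threshold each coordinate."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)
```

In step 1 of Algorithm DPDA-S below, node $i$ would then call `prox_l1(v, tau_i * lam)` with its local step size (the argument names are ours).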
1.1 Previous Work
There has been active research [8, 9, 10, 11, 12] on solving convex-concave saddle point problems $\min_x \max_y \mathcal{L}(x, y)$. In [9], primal-dual proximal algorithms are proposed for convex-concave problems with the known saddle-point structure $\min_x \max_y \mathcal{L}_s(x, y) \triangleq \Phi(x) + \langle Tx, y \rangle - h(y)$, where $\Phi$ and $h$ are convex functions and $T$ is a linear map. These algorithms converge with rate $O(1/k)$ for the primal-dual gap, and they can be modified to yield a convergence rate of $O(1/k^2)$ when either $\Phi$ or $h$ is strongly convex, and a linear rate $O(1/e^k)$ when both $\Phi$ and $h$ are strongly convex. More recently, in [11] Chambolle and Pock extend their previous work in [9], using simpler proofs, to handle composite convex primal functions, i.e., sums of smooth and (possibly) nonsmooth functions, and to deal with proximity operators based on Bregman distance functions.
Consider $\min_{x \in \mathbb{R}^n} \{\sum_{i \in \mathcal{N}} \varphi_i(x) : x \in \bigcap_{i \in \mathcal{N}} \mathcal{X}_i\}$ over $\mathcal{G} = (\mathcal{N}, \mathcal{E})$. Although the unconstrained consensus optimization, i.e., $\mathcal{X}_i = \mathbb{R}^n$, is well studied (see [13, 14] and the references therein), the constrained case is still an immature and recently developing area of active research [13, 14, 15, 16, 17, 18, 19]. Other than a few exceptions, e.g., [15, 16, 17], the methods in the literature require that each node compute a projection onto the privately known set $\mathcal{X}_i$ in addition to consensus and (sub)gradient steps, e.g., [18, 19]. Moreover, among those few exceptions that do not use projections onto $\mathcal{X}_i$ when $P_{\mathcal{X}_i}$ is not easy to compute, only [15, 16] can handle agent-specific constraints without assuming global knowledge of the constraints by all agents. However, no rate results in terms of suboptimality, local infeasibility, and consensus violation exist for the primal-dual distributed methods in [15, 16] when implemented for the agent-specific conic constraint sets $\mathcal{X}_i = \{x : A_i x - b_i \in \mathcal{K}_i\}$ studied in this paper. In [15], a consensus-based distributed primal-dual perturbation (PDP) algorithm using a square summable but not summable step-size sequence
is proposed. The objective is to minimize a composition of a global network function (smooth) with
the summation of local objective functions (smooth), subject to local compact sets and inequality
constraints on the summation of agent-specific constraint functions. They showed that the local
primal-dual iterate sequence converges to a global optimal primal-dual solution; however, no rate
result was provided. The proposed PDP method can also handle non-smooth constraints with similar convergence guarantees. Finally, while we were preparing this paper, we became aware of a
very recent work [16] related to ours. The authors proposed a distributed algorithm on time-varying
communication network for solving saddle-point problems subject to consensus constraints. The
algorithm can also be applied to solve consensus optimization problems with inequality constraints
that can be written as summation of local convex functions of local and global variables. Under
some assumptions, it is shown that using a carefully selected decreasing
step-size sequence, the
?
ergodic average of primal-dual sequence converges with O(1/ k) rate in terms of saddle-point
evaluation error; however, when applied to constrained optimization problems, no rate in terms of
either suboptimality or infeasibility is provided.
Contribution. We propose primal-dual algorithms for distributed optimization subject to agent-specific conic constraints. By assuming composite convex structure on the primal functions, we
show that our proposed algorithms converge with O(1/k) rate where k is the number of consensus
iterations. To the best of our knowledge, this is the best rate result for our setting. Indeed, an $\epsilon$-optimal and $\epsilon$-feasible solution can be computed within $O(1/\epsilon)$ consensus iterations for the static topology, and within $O(1/\epsilon^{1+1/p})$ consensus iterations for the dynamic topology for any rational $p \ge 1$, although the $O(1)$ constant gets larger for large $p$. Moreover, these methods are fully distributed, i.e.,
the agents are not required to know any global parameter depending on the entire network topology,
e.g., the second smallest eigenvalue of the Laplacian; instead, we only assume that agents know who
their neighbors are. Due to limited space, we defer all technical proofs to the appendix.
1.2 Preliminary
Let $\mathcal{X}$ and $\mathcal{Y}$ be finite-dimensional vector spaces. In a recent paper, Chambolle and Pock [11] proposed a primal-dual algorithm (PDA) for the following convex-concave saddle-point problem:

$$\min_{x \in \mathcal{X}} \max_{y \in \mathcal{Y}} \mathcal{L}(x, y) \triangleq \Phi(x) + \langle Tx, y \rangle - h(y), \quad \text{where } \Phi(x) \triangleq \rho(x) + f(x), \qquad (4)$$

$\rho$ and $h$ are possibly non-smooth convex functions, $f$ is a convex function with a Lipschitz continuous gradient defined on $\mathrm{dom}\,\rho$ with constant $L$, and $T$ is a linear map. Briefly, given $x^0, y^0$ and algorithm parameters $\tau_x, \tau_y > 0$, PDA consists of two proximal-gradient steps:

$$x^{k+1} \gets \operatorname*{argmin}_{x}\; \rho(x) + f(x^k) + \langle \nabla f(x^k), x - x^k \rangle + \langle Tx, y^k \rangle + \frac{1}{\tau_x} D_x(x, x^k) \qquad \text{(5a)}$$
$$y^{k+1} \gets \operatorname*{argmin}_{y}\; h(y) - \langle T(2x^{k+1} - x^k), y \rangle + \frac{1}{\tau_y} D_y(y, y^k) \qquad \text{(5b)}$$

where $D_x$ and $D_y$ are Bregman distance functions corresponding to some continuously differentiable strongly convex $\psi_x$ and $\psi_y$ such that $\mathrm{dom}\,\psi_x \supseteq \mathrm{dom}\,\rho$ and $\mathrm{dom}\,\psi_y \supseteq \mathrm{dom}\,h$. In particular, $D_x(x, \bar{x}) \triangleq \psi_x(x) - \psi_x(\bar{x}) - \langle \nabla\psi_x(\bar{x}), x - \bar{x} \rangle$, and $D_y$ is defined similarly. In [11], a simple proof for the ergodic convergence is provided for (5); indeed, it is shown that, when the convexity modulus for $\psi_x$ and $\psi_y$ is 1, if $\tau_x, \tau_y > 0$ are chosen such that $(\frac{1}{\tau_x} - L)\frac{1}{\tau_y} \ge \sigma_{\max}^2(T)$, then

$$\mathcal{L}(\bar{x}^K, y) - \mathcal{L}(x, \bar{y}^K) \le \frac{1}{K}\left[ \frac{1}{\tau_x} D_x(x, x^0) + \frac{1}{\tau_y} D_y(y, y^0) - \langle T(x - x^0), y - y^0 \rangle \right] \qquad (6)$$

for all $x, y \in \mathcal{X} \times \mathcal{Y}$, where $\bar{x}^K \triangleq \frac{1}{K}\sum_{k=1}^{K} x^k$ and $\bar{y}^K \triangleq \frac{1}{K}\sum_{k=1}^{K} y^k$.
First, we define the notation used throughout the paper. Next, in Theorem 1.1, we discuss a special
case of (4), which will help us prove the main results of this paper, and also allow us to develop
decentralized algorithms for the consensus optimization problem in (3). The proposed algorithms in
this paper can distribute the computation over the nodes such that each node's computation is based on the local topology of $\mathcal{G}$ and the private information only available to that node.
Notation. Throughout the paper, $\|\cdot\|$ denotes the Euclidean norm. Given a convex set $\mathcal{S}$, let $\sigma_{\mathcal{S}}(\cdot)$ denote its support function, i.e., $\sigma_{\mathcal{S}}(\theta) \triangleq \sup_{w \in \mathcal{S}} \langle \theta, w \rangle$; let $\mathbb{I}_{\mathcal{S}}(\cdot)$ denote the indicator function of $\mathcal{S}$, i.e., $\mathbb{I}_{\mathcal{S}}(w) = 0$ for $w \in \mathcal{S}$ and $+\infty$ otherwise; and let $P_{\mathcal{S}}(w) \triangleq \operatorname{argmin}\{\|v - w\| : v \in \mathcal{S}\}$ denote the projection onto $\mathcal{S}$. For a closed convex set $\mathcal{S}$, we define the distance function as $d_{\mathcal{S}}(w) \triangleq \|P_{\mathcal{S}}(w) - w\|$. Given a convex cone $\mathcal{K} \subseteq \mathbb{R}^m$, let $\mathcal{K}^*$ denote its dual cone, i.e., $\mathcal{K}^* \triangleq \{\theta \in \mathbb{R}^m : \langle \theta, w \rangle \ge 0 \;\; \forall w \in \mathcal{K}\}$, and $\mathcal{K}^\circ \triangleq -\mathcal{K}^*$ denote the polar cone of $\mathcal{K}$. Note that for a given cone $\mathcal{K} \subseteq \mathbb{R}^m$, $\sigma_{\mathcal{K}}(\theta) = 0$ for $\theta \in \mathcal{K}^\circ$ and equal to $+\infty$ if $\theta \notin \mathcal{K}^\circ$, i.e., $\sigma_{\mathcal{K}}(\theta) = \mathbb{I}_{\mathcal{K}^\circ}(\theta)$ for all $\theta \in \mathbb{R}^m$. Cone $\mathcal{K}$ is called proper if it is closed, convex, pointed, and has nonempty interior. Given a convex function $g : \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$, its convex conjugate is defined as $g^*(w) \triangleq \sup_{\theta \in \mathbb{R}^n} \langle w, \theta \rangle - g(\theta)$. $\otimes$ denotes the Kronecker product, and $I_n$ is the $n \times n$ identity matrix.
Definition 1. Let $\mathcal{X} \triangleq \prod_{i \in \mathcal{N}} \mathbb{R}^n$ and $\mathcal{X} \ni \mathbf{x} = [x_i]_{i \in \mathcal{N}}$; $\mathcal{Y} \triangleq \prod_{i \in \mathcal{N}} \mathbb{R}^{m_i} \times \mathbb{R}^{m_0}$, $\mathcal{Y} \ni \mathbf{y} = [\theta^\top \; \lambda^\top]^\top$ and $\theta = [\theta_i]_{i \in \mathcal{N}} \in \mathbb{R}^m$, where $m \triangleq \sum_{i \in \mathcal{N}} m_i$, and $\prod$ denotes the Cartesian product. Given parameters $\gamma > 0$, $\kappa_i, \tau_i > 0$ for $i \in \mathcal{N}$, let $D_\gamma \triangleq \frac{1}{\gamma} I_{m_0}$, $D_\kappa \triangleq \mathbf{diag}([\frac{1}{\kappa_i} I_{m_i}]_{i \in \mathcal{N}})$, and $D_\tau \triangleq \mathbf{diag}([\frac{1}{\tau_i} I_n]_{i \in \mathcal{N}})$. Defining $\psi_x(\mathbf{x}) \triangleq \frac{1}{2}\mathbf{x}^\top D_\tau \mathbf{x}$ and $\psi_y(\mathbf{y}) \triangleq \frac{1}{2}\theta^\top D_\kappa \theta + \frac{1}{2}\lambda^\top D_\gamma \lambda$ leads to the following Bregman distance functions: $D_x(\mathbf{x}, \bar{\mathbf{x}}) = \frac{1}{2}\|\mathbf{x} - \bar{\mathbf{x}}\|^2_{D_\tau}$ and $D_y(\mathbf{y}, \bar{\mathbf{y}}) = \frac{1}{2}\|\theta - \bar{\theta}\|^2_{D_\kappa} + \frac{1}{2}\|\lambda - \bar{\lambda}\|^2_{D_\gamma}$, where the $Q$-norm is defined as $\|z\|_Q \triangleq (z^\top Q z)^{1/2}$ for $Q \succ 0$.
Theorem 1.1. Let $\mathcal{X}$, $\mathcal{Y}$, and the Bregman functions $D_x$, $D_y$ be defined as in Definition 1. Suppose $\Phi(\mathbf{x}) \triangleq \sum_{i \in \mathcal{N}} \varphi_i(x_i)$ and $h(\mathbf{y}) \triangleq h_0(\lambda) + \sum_{i \in \mathcal{N}} h_i(\theta_i)$, where $\{\varphi_i\}_{i \in \mathcal{N}}$ are composite convex functions defined as in (1), and $\{h_i\}_{i \in \mathcal{N}}$ are closed convex with simple prox-maps. Given $A_0 \in \mathbb{R}^{m_0 \times n|\mathcal{N}|}$ and $\{A_i\}_{i \in \mathcal{N}}$ such that $A_i \in \mathbb{R}^{m_i \times n}$, let $T = [A^\top \; A_0^\top]^\top$, where $A \triangleq \mathbf{diag}([A_i]_{i \in \mathcal{N}}) \in \mathbb{R}^{m \times n|\mathcal{N}|}$ is a block-diagonal matrix. Given the initial point $(\mathbf{x}^0, \mathbf{y}^0)$, the PDA iterate sequence $\{\mathbf{x}^k, \mathbf{y}^k\}_{k \ge 1}$, generated according to (5a) and (5b) when $\tau_x = \tau_y = 1$, satisfies (6) for all $K \ge 1$ if

$$\bar{Q}(A, A_0) \triangleq \begin{bmatrix} \bar{D}_\tau & -A^\top & -A_0^\top \\ -A & D_\kappa & 0 \\ -A_0 & 0 & D_\gamma \end{bmatrix} \succeq 0, \qquad \text{where } \bar{D}_\tau \triangleq \mathbf{diag}\Big(\Big[\big(\tfrac{1}{\tau_i} - L_i\big) I_n\Big]_{i \in \mathcal{N}}\Big).$$

Moreover, if a saddle point exists for (4) and $\bar{Q}(A, A_0) \succ 0$, then $\{\mathbf{x}^k, \mathbf{y}^k\}_{k \ge 1}$ converges to a saddle point of (4); hence, $\{\bar{\mathbf{x}}^k, \bar{\mathbf{y}}^k\}_{k \ge 1}$ converges to the same point.
Although the proof of Theorem 1.1 follows from the lines of [11], we provide the proof in the
appendix for the sake of completeness as it will be used repeatedly to derive our results.
Next we discuss how (5) can be implemented to compute an $\epsilon$-optimal solution to (3) in a distributed way using only $O(1/\epsilon)$ communications over the communication graph $\mathcal{G}$ while respecting node-specific privacy requirements. Later, in Section 3, we consider the scenario where the topology of the connectivity graph is time-varying, and propose a distributed algorithm that requires $O(1/\epsilon^{1+1/p})$ communications for any $p \ge 1$. Finally, in Section 4 we test the proposed algorithms for solving the primal SVM problem in a decentralized manner. These results are shown under Assumption 1.1.
Assumption 1.1. The duality gap for (3) is zero, and a primal-dual solution to (3) exists.
A sufficient condition for this is the existence of a Slater point, i.e., there exists $\bar{x} \in \mathrm{relint}(\mathrm{dom}\,\varphi)$ such that $A_i \bar{x} - b_i \in \mathrm{int}(\mathcal{K}_i)$ for $i \in \mathcal{N}$, where $\mathrm{dom}\,\varphi = \bigcap_{i \in \mathcal{N}} \mathrm{dom}\,\varphi_i$.
2 Static Network Topology
Let xi ? Rn denote the local decision vector of node i ? N . By taking advantage of the fact that G
is connected, we can reformulate (3) as the following distributed consensus optimization problem:
min
xi ?Rn , i?N
(
X
i?N
?i (xi ) | xi = xj : ?ij , ?(i, j) ? E,
Ai xi ? bi ? Ki : ?i , ?i ? N
)
,
(7)
where $\lambda_{ij} \in \mathbb{R}^n$ and $\theta_i \in \mathbb{R}^{m_i}$ are the corresponding dual variables. Let $\mathbf{x} = [x_i]_{i\in\mathcal{N}} \in \mathbb{R}^{n|\mathcal{N}|}$. The consensus constraints $x_i = x_j$ for $(i,j) \in \mathcal{E}$ can be formulated as $M\mathbf{x} = 0$, where $M \in \mathbb{R}^{n|\mathcal{E}| \times n|\mathcal{N}|}$ is a block matrix such that $M = H \otimes I_n$, where $H$ is the oriented edge-node incidence matrix, i.e., the entry $H_{(i,j),l}$, corresponding to edge $(i,j) \in \mathcal{E}$ and $l \in \mathcal{N}$, is equal to $1$ if $l = i$, $-1$ if $l = j$, and $0$ otherwise. Note that $M^\top M = H^\top H \otimes I_n = \Omega \otimes I_n$, where $\Omega \in \mathbb{R}^{|\mathcal{N}| \times |\mathcal{N}|}$ denotes the graph Laplacian of $\mathcal{G}$, i.e., $\Omega_{ii} = d_i$, $\Omega_{ij} = -1$ if $(i,j) \in \mathcal{E}$ or $(j,i) \in \mathcal{E}$, and equal to $0$ otherwise.
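To make the identity $M^\top M = \Omega \otimes I_n$ concrete, the following NumPy sketch (our illustration, not part of the paper) builds the oriented incidence matrix $H$ of a small cycle and checks both $H^\top H = \Omega$ and the Kronecker form.

```python
import numpy as np

def incidence(num_nodes, edges):
    """Oriented edge-node incidence matrix: +1 at the tail i, -1 at the head j."""
    H = np.zeros((len(edges), num_nodes))
    for e, (i, j) in enumerate(edges):
        H[e, i], H[e, j] = 1.0, -1.0
    return H

H = incidence(4, [(0, 1), (1, 2), (2, 3), (3, 0)])      # 4-cycle
Omega = H.T @ H                                         # graph Laplacian
assert np.allclose(np.diag(Omega), 2.0)                 # degrees on the diagonal
n = 3
M = np.kron(H, np.eye(n))                               # M = H ⊗ I_n
assert np.allclose(M.T @ M, np.kron(Omega, np.eye(n)))  # M^T M = Ω ⊗ I_n
```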
For any closed convex set $\mathcal{S}$, we have $\sigma^*_{\mathcal{S}}(\theta) = \mathbb{I}_{\mathcal{S}}(\theta)$; therefore, using the fact that $\sigma^*_{\mathcal{K}_i} = \mathbb{I}_{\mathcal{K}_i}$ for $i \in \mathcal{N}$, one can obtain the following saddle point problem corresponding to (7),
$$\min_{\mathbf{x}} \max_{\mathbf{y}} \mathcal{L}(\mathbf{x}, \mathbf{y}) \triangleq \sum_{i\in\mathcal{N}} \Big[ \Phi_i(x_i) + \langle \theta_i, A_i x_i - b_i \rangle - \sigma_{\mathcal{K}_i}(\theta_i) \Big] + \langle \lambda, M\mathbf{x} \rangle, \tag{8}$$
where $\mathbf{y} = [\lambda^\top \theta^\top]^\top$ for $\lambda = [\lambda_{ij}]_{(i,j)\in\mathcal{E}} \in \mathbb{R}^{n|\mathcal{E}|}$, $\theta = [\theta_i]_{i\in\mathcal{N}} \in \mathbb{R}^m$, and $m \triangleq \sum_{i\in\mathcal{N}} m_i$.
Next, we study the distributed implementation of PDA in (5a)-(5b) to solve (8). Let $\Phi(\mathbf{x}) \triangleq \sum_{i\in\mathcal{N}} \Phi_i(x_i)$, and $h(\mathbf{y}) \triangleq \sum_{i\in\mathcal{N}} \sigma_{\mathcal{K}_i}(\theta_i) + \langle b_i, \theta_i \rangle$. Define the block-diagonal matrix $A \triangleq \mathbf{diag}([A_i]_{i\in\mathcal{N}}) \in \mathbb{R}^{m \times n|\mathcal{N}|}$ and $T = [A^\top\ M^\top]^\top$. Therefore, given the initial iterates $\mathbf{x}^0, \theta^0, \lambda^0$ and parameters $\gamma > 0$, $\tau_i, \kappa_i > 0$ for $i \in \mathcal{N}$, choosing $D_{\mathcal{X}}$ and $D_{\mathcal{Y}}$ as defined in Definition 1, and setting $\nu_x = \nu_y = 1$, the PDA iterations in (5a)-(5b) take the following form:
$$\mathbf{x}^{k+1} \leftarrow \operatorname*{argmin}_{\mathbf{x}} \langle \lambda^k, M\mathbf{x} \rangle + \sum_{i\in\mathcal{N}} \Big[ \rho_i(x_i) + \langle \nabla f_i(x_i^k), x_i \rangle + \langle A_i x_i - b_i, \theta_i^k \rangle + \tfrac{1}{2\tau_i}\|x_i - x_i^k\|^2 \Big], \tag{9a}$$
$$\theta_i^{k+1} \leftarrow \operatorname*{argmin}_{\theta_i} \sigma_{\mathcal{K}_i}(\theta_i) - \langle A_i(2x_i^{k+1} - x_i^k) - b_i, \theta_i \rangle + \tfrac{1}{2\kappa_i}\|\theta_i - \theta_i^k\|^2, \quad i \in \mathcal{N}, \tag{9b}$$
$$\lambda^{k+1} \leftarrow \operatorname*{argmin}_{\lambda} -\langle M(2\mathbf{x}^{k+1} - \mathbf{x}^k), \lambda \rangle + \tfrac{1}{2\gamma}\|\lambda - \lambda^k\|^2 = \lambda^k + \gamma M(2\mathbf{x}^{k+1} - \mathbf{x}^k). \tag{9c}$$
Since $\mathcal{K}_i$ is a cone, $\mathbf{prox}_{\kappa_i \sigma_{\mathcal{K}_i}}(\cdot) = \mathcal{P}_{\mathcal{K}_i^\circ}(\cdot)$; hence, $\theta_i^{k+1}$ can be written in closed form as
$$\theta_i^{k+1} = \mathcal{P}_{\mathcal{K}_i^\circ}\big( \theta_i^k + \kappa_i (A_i(2x_i^{k+1} - x_i^k) - b_i) \big), \quad i \in \mathcal{N}.$$
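For simple cones this prox-projection identity is available in closed form. A minimal sketch (ours, assuming $\mathcal{K} = \mathbb{R}^m_+$, whose polar cone is $-\mathbb{R}^m_+$) illustrates the Moreau decomposition $z = \mathcal{P}_{\mathcal{K}}(z) + \mathcal{P}_{\mathcal{K}^\circ}(z)$ that underlies it:

```python
import numpy as np

def proj_polar_nonneg_orthant(z):
    """P_{K°}(z) for K = R^m_+: the polar cone is -R^m_+, so keep the
    negative part of z (Moreau: z = max(z, 0) + min(z, 0))."""
    return np.minimum(z, 0.0)

z = np.array([1.5, -0.3, 0.0])
assert np.allclose(np.maximum(z, 0.0) + proj_polar_nonneg_orthant(z), z)
```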
Using the recursion in (9c), we can write $\lambda^{k+1}$ as a partial summation of the primal iterates $\{\mathbf{x}^\ell\}_{\ell=0}^k$, i.e., $\lambda^k = \lambda^0 + \gamma \sum_{\ell=0}^{k-1} M(2\mathbf{x}^{\ell+1} - \mathbf{x}^\ell)$. Let $\lambda^0 \leftarrow \gamma M \mathbf{x}^0$, $\mathbf{s}^0 \leftarrow \mathbf{x}^0$, and $\mathbf{s}^k \triangleq \mathbf{x}^k + \sum_{\ell=1}^{k} \mathbf{x}^\ell$ for $k \geq 1$; hence, $\lambda^k = \gamma M \mathbf{s}^k$. Using the fact that $M^\top M = \Omega \otimes I_n$, we obtain
$$\langle M\mathbf{x}, \lambda^k \rangle = \gamma \langle \mathbf{x}, (\Omega \otimes I_n)\mathbf{s}^k \rangle = \gamma \sum_{i\in\mathcal{N}} \Big\langle x_i, \sum_{j\in\mathcal{N}_i} (s_i^k - s_j^k) \Big\rangle.$$
Thus, the PDA iterations given in (9) for the static graph $\mathcal{G}$ can be computed in a decentralized way, via the node-specific computations in Algorithm DPDA-S displayed in Fig. 1 below.
Algorithm DPDA-S ( $\mathbf{x}^0, \theta^0, \gamma, \{\tau_i, \kappa_i\}_{i\in\mathcal{N}}$ )
Initialization: $s_i^0 \leftarrow x_i^0$, $i \in \mathcal{N}$
Step $k$: ($k \geq 0$)
1. $x_i^{k+1} \leftarrow \mathbf{prox}_{\tau_i \rho_i}\big( x_i^k - \tau_i \big( \nabla f_i(x_i^k) + A_i^\top \theta_i^k + \gamma \sum_{j\in\mathcal{N}_i} (s_i^k - s_j^k) \big) \big)$, $\quad i \in \mathcal{N}$
2. $s_i^{k+1} \leftarrow x_i^{k+1} + \sum_{\ell=1}^{k+1} x_i^\ell$, $\quad i \in \mathcal{N}$
3. $\theta_i^{k+1} \leftarrow \mathcal{P}_{\mathcal{K}_i^\circ}\big( \theta_i^k + \kappa_i (A_i(2x_i^{k+1} - x_i^k) - b_i) \big)$, $\quad i \in \mathcal{N}$

Figure 1: Distributed Primal Dual Algorithm for Static $\mathcal{G}$ (DPDA-S)
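The following self-contained Python sketch runs DPDA-S on a toy instance of our own construction (it is not an experiment from the paper): three nodes on a line graph, smooth quadratics $f_i(x) = \frac{1}{2}\|x - c_i\|^2$ with $\rho_i \equiv 0$ (so the prox is the identity), and $A_i = I$, $\mathcal{K}_i = \mathbb{R}^n_+$, so the conic constraint is the elementwise bound $x_i \geq b_i$; the step sizes follow Remark 2.1 below.

```python
import numpy as np

n, nodes = 2, 3
nbrs = {0: [1], 1: [0, 2], 2: [1]}                    # line graph
c = np.array([[0., 0.], [2., 1.], [4., -1.]])         # f_i(x) = 0.5 ||x - c_i||^2, L_i = 1
b = np.array([[1., 0.]] * nodes)                      # A_i = I, K_i = R^n_+:  x >= b_i

gamma, c_i, L = 1.0, 1.0, 1.0
deg = np.array([len(nbrs[i]) for i in range(nodes)], dtype=float)
tau = 1.0 / (c_i + L + 2.0 * gamma * deg)             # Remark 2.1 with sigma_max(I) = 1
kappa = np.full(nodes, c_i)

x = np.zeros((nodes, n)); theta = np.zeros((nodes, n))
s = x.copy(); psum = np.zeros((nodes, n))             # psum_i = sum_{l=1..k} x_i^l
K = 5000
for k in range(K):
    lap = np.array([sum(s[i] - s[j] for j in nbrs[i]) for i in range(nodes)])
    x_new = x - tau[:, None] * ((x - c) + theta + gamma * lap)             # step 1
    theta = np.minimum(theta + kappa[:, None] * ((2*x_new - x) - b), 0.0)  # step 3: P_{K°}
    psum += x_new
    s = x_new + psum                                  # step 2: s^{k+1} = x^{k+1} + sum x^l
    x = x_new

print(psum / K)                                       # ergodic average at every node
print(np.maximum(c.mean(axis=0), b.max(axis=0)))      # centralized solution for this toy
```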
The convergence rate for DPDA-S, given in (6), follows from Theorem 1.1 with the help of the following technical lemma, which provides a sufficient condition for $\bar{Q}(A, A_0) \succeq 0$.
Lemma 2.1. Given $\{\tau_i, \kappa_i\}_{i\in\mathcal{N}}$ and $\gamma$ such that $\gamma > 0$, and $\tau_i, \kappa_i > 0$ for $i \in \mathcal{N}$, let $A_0 = M$ and $A \triangleq \mathbf{diag}([A_i]_{i\in\mathcal{N}})$. Then $\bar{Q} \triangleq \bar{Q}(A, A_0) \succeq 0$ if $\{\tau_i, \kappa_i\}_{i\in\mathcal{N}}$ and $\gamma$ are chosen such that
$$\Big( \frac{1}{\tau_i} - L_i - 2\gamma d_i \Big) \frac{1}{\kappa_i} \geq \sigma^2_{\max}(A_i), \quad \forall i \in \mathcal{N}, \tag{10}$$
and $\bar{Q} \succ 0$ if (10) holds with strict inequality, where $\bar{Q}(A, A_0)$ is defined in Theorem 1.1.
Remark 2.1. Choosing $\tau_i = (c_i + L_i + 2\gamma d_i)^{-1}$ and $\kappa_i = c_i / \sigma^2_{\max}(A_i)$ for any $c_i > 0$ satisfies (10).
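A small helper (ours) that implements this choice and numerically verifies condition (10):

```python
import numpy as np

def dpda_s_stepsizes(L, deg, sigma_max, gamma, c=1.0):
    """Remark 2.1: tau_i = (c_i + L_i + 2 gamma d_i)^{-1}, kappa_i = c_i / sigma_max(A_i)^2."""
    L, deg, sigma_max = map(np.asarray, (L, deg, sigma_max))
    tau = 1.0 / (c + L + 2.0 * gamma * deg)
    kappa = c / sigma_max**2
    # condition (10): (1/tau_i - L_i - 2 gamma d_i) / kappa_i >= sigma_max(A_i)^2
    assert np.all((1.0/tau - L - 2.0*gamma*deg) / kappa >= sigma_max**2 - 1e-12)
    return tau, kappa

tau, kappa = dpda_s_stepsizes(L=[1., 1., 1.], deg=[1., 2., 1.],
                              sigma_max=[1., 1., 1.], gamma=1.0)
```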
Next, we quantify the suboptimality and infeasibility of the DPDA-S iterate sequence.
Theorem 2.2. Suppose Assumption 1.1 holds. Let $\{\mathbf{x}^k, \theta^k, \lambda^k\}_{k\geq 0}$ be the sequence generated by Algorithm DPDA-S, displayed in Fig. 1, initialized from an arbitrary $\mathbf{x}^0$ and $\theta^0 = 0$. Let the step-sizes $\{\tau_i, \kappa_i\}_{i\in\mathcal{N}}$ and $\gamma$ be chosen satisfying (10) with strict inequality. Then $\{\mathbf{x}^k, \theta^k, \lambda^k\}_{k\geq 0}$ converges to $\{\mathbf{x}^*, \theta^*, \lambda^*\}$, a saddle point of (8) such that $\mathbf{x}^* = \mathbf{1} \otimes x^*$ and $(x^*, \theta^*)$ is a primal-dual optimal solution to (3); moreover, the following error bounds hold for all $K \geq 1$:
$$\|\lambda^*\| \, \|M\bar{\mathbf{x}}^K\| + \sum_{i\in\mathcal{N}} \|\theta_i^*\| \, d_{\mathcal{K}_i}(A_i \bar{x}_i^K - b_i) \leq \Theta_1 / K, \qquad |\Phi(\bar{\mathbf{x}}^K) - \Phi(\mathbf{x}^*)| \leq \Theta_1 / K,$$
where $\Theta_1 \triangleq \frac{\gamma}{2}\|\lambda^*\|^2 - \frac{\gamma}{2}\|M\mathbf{x}^0\|^2 + \sum_{i\in\mathcal{N}} \big[ \frac{1}{2\tau_i}\|x_i^* - x_i^0\|^2 + \frac{4}{\kappa_i}\|\theta_i^*\|^2 \big]$, and $\bar{\mathbf{x}}^K \triangleq \frac{1}{K}\sum_{k=1}^K \mathbf{x}^k$.

3 Dynamic Network Topology
In this section we develop a distributed primal-dual algorithm for solving (3) when the communication network topology is time-varying. We assume a compact domain, i.e., let $D_i \triangleq \max_{x_i, \bar{x}_i \in \mathbf{dom}\,\rho_i} \|x_i - \bar{x}_i\|$ and $B \triangleq \max_{i\in\mathcal{N}} D_i < \infty$. Let $\mathcal{C}$ be the set of consensus decisions:
$$\mathcal{C} \triangleq \big\{ \mathbf{x} \in \mathbb{R}^{n|\mathcal{N}|} : x_i = \bar{x},\ \forall i \in \mathcal{N}\ \text{for some } \bar{x} \in \mathbb{R}^n \text{ s.t. } \|\bar{x}\| \leq B \big\},$$
then one can reformulate (3) in a decentralized way as follows:
$$\min_{\mathbf{x}} \max_{\mathbf{y}} \mathcal{L}(\mathbf{x}, \mathbf{y}) \triangleq \sum_{i\in\mathcal{N}} \Big[ \Phi_i(x_i) + \langle \theta_i, A_i x_i - b_i \rangle - \sigma_{\mathcal{K}_i}(\theta_i) \Big] + \langle \mu, \mathbf{x} \rangle - \sigma_{\mathcal{C}}(\mu), \tag{11}$$
where $\mathbf{y} = [\theta^\top \mu^\top]^\top$ such that $\mu \in \mathbb{R}^{n|\mathcal{N}|}$, $\theta = [\theta_i]_{i\in\mathcal{N}} \in \mathbb{R}^m$, and $m \triangleq \sum_{i\in\mathcal{N}} m_i$.
Next, we consider the implementation of PDA in (5) to solve (11). Let $\Phi(\mathbf{x}) \triangleq \sum_{i\in\mathcal{N}} \Phi_i(x_i)$, and $h(\mathbf{y}) \triangleq \sigma_{\mathcal{C}}(\mu) + \sum_{i\in\mathcal{N}} \sigma_{\mathcal{K}_i}(\theta_i) + \langle b_i, \theta_i \rangle$. Define the block-diagonal matrix $A \triangleq \mathbf{diag}([A_i]_{i\in\mathcal{N}}) \in \mathbb{R}^{m \times n|\mathcal{N}|}$ and $T = [A^\top\ I_{n|\mathcal{N}|}]^\top$. Therefore, given the initial iterates $\mathbf{x}^0, \theta^0, \mu^0$ and parameters $\gamma > 0$, $\tau_i, \kappa_i > 0$ for $i \in \mathcal{N}$, choosing $D_{\mathcal{X}}$ and $D_{\mathcal{Y}}$ as defined in Definition 1, and setting $\nu_x = \nu_y = 1$, the PDA iterations given in (5) take the following form: starting from $\lambda^0 = \mu^0$, compute for $i \in \mathcal{N}$
$$x_i^{k+1} \leftarrow \operatorname*{argmin}_{x_i} \rho_i(x_i) + \langle \nabla f_i(x_i^k), x_i \rangle + \langle A_i x_i - b_i, \theta_i^k \rangle + \langle x_i, \lambda_i^k \rangle + \tfrac{1}{2\tau_i}\|x_i - x_i^k\|_2^2, \tag{12a}$$
$$\theta_i^{k+1} \leftarrow \operatorname*{argmin}_{\theta_i} \sigma_{\mathcal{K}_i}(\theta_i) - \langle A_i(2x_i^{k+1} - x_i^k) - b_i, \theta_i \rangle + \tfrac{1}{2\kappa_i}\|\theta_i - \theta_i^k\|_2^2, \tag{12b}$$
$$\mu^{k+1} \leftarrow \operatorname*{argmin}_{\mu} \sigma_{\mathcal{C}}(\mu) - \langle 2\mathbf{x}^{k+1} - \mathbf{x}^k, \mu \rangle + \tfrac{1}{2\gamma}\|\mu - \lambda^k\|_2^2, \qquad \lambda^{k+1} \leftarrow \mu^{k+1}. \tag{12c}$$
Using the extended Moreau decomposition for proximal operators, $\mu^{k+1}$ can be written as
$$\mu^{k+1} = \operatorname*{argmin}_{\mu} \sigma_{\mathcal{C}}(\mu) + \tfrac{1}{2\gamma}\big\| \mu - \big( \lambda^k + \gamma(2\mathbf{x}^{k+1} - \mathbf{x}^k) \big) \big\|^2 = \mathbf{prox}_{\gamma \sigma_{\mathcal{C}}}\big( \lambda^k + \gamma(2\mathbf{x}^{k+1} - \mathbf{x}^k) \big)$$
$$= \lambda^k + \gamma(2\mathbf{x}^{k+1} - \mathbf{x}^k) - \gamma\, \mathcal{P}_{\mathcal{C}}\big( \tfrac{1}{\gamma}\lambda^k + 2\mathbf{x}^{k+1} - \mathbf{x}^k \big). \tag{13}$$
Let $\mathbf{1} \in \mathbb{R}^{|\mathcal{N}|}$ be the vector of all ones, and $B_0 \triangleq \{x \in \mathbb{R}^n : \|x\| \leq B\}$. Note that $\mathcal{P}_{B_0}(x) = x \min\{1, \frac{B}{\|x\|}\}$. For any $\mathbf{x} = [x_i]_{i\in\mathcal{N}} \in \mathbb{R}^{n|\mathcal{N}|}$, $\mathcal{P}_{\mathcal{C}}(\mathbf{x})$ can be computed as
$$\mathcal{P}_{\mathcal{C}}(\mathbf{x}) = \mathbf{1} \otimes p(\mathbf{x}), \quad \text{where } p(\mathbf{x}) \triangleq \operatorname*{argmin}_{\xi \in B_0} \sum_{i\in\mathcal{N}} \|\xi - x_i\|^2 = \operatorname*{argmin}_{\xi \in B_0} \Big\| \xi - \frac{1}{|\mathcal{N}|}\sum_{i\in\mathcal{N}} x_i \Big\|^2. \tag{14}$$
Let $\mathcal{B} \triangleq \{\mathbf{x} : \|x_i\| \leq B,\ i \in \mathcal{N}\} = \Pi_{i\in\mathcal{N}} B_0$. Hence, we can write $\mathcal{P}_{\mathcal{C}}(\mathbf{x}) = \mathcal{P}_{\mathcal{B}}((W \otimes I_n)\mathbf{x})$, where $W \triangleq \frac{1}{|\mathcal{N}|}\mathbf{1}\mathbf{1}^\top \in \mathbb{R}^{|\mathcal{N}| \times |\mathcal{N}|}$. Equivalently,
$$\mathcal{P}_{\mathcal{C}}(\mathbf{x}) = \mathcal{P}_{\mathcal{B}}\big( \mathbf{1} \otimes \bar{p}(\mathbf{x}) \big), \quad \text{where } \bar{p}(\mathbf{x}) \triangleq \frac{1}{|\mathcal{N}|}\sum_{i\in\mathcal{N}} x_i. \tag{15}$$
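A direct NumPy sketch of (14)-(15) (our illustration): average the node blocks, project the average onto the $B$-ball, and replicate it across the nodes.

```python
import numpy as np

def proj_consensus(x, B):
    """P_C(x) for x of shape (N, n): stack P_{B_0}(mean of the blocks) at every node."""
    p = x.mean(axis=0)                       # \bar{p}(x) from (15)
    nrm = np.linalg.norm(p)
    if nrm > B:                              # P_{B_0}: shrink onto the ball of radius B
        p *= B / nrm
    return np.tile(p, (x.shape[0], 1))       # 1 ⊗ p(x)

x = np.random.default_rng(1).normal(size=(4, 3))
print(proj_consensus(x, B=2.0))
```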
Although the $\mathbf{x}$-step and $\theta$-step of the PDA implementation in (12) can be computed locally at each node, computing $\mu^{k+1}$ requires communication among the nodes. Indeed, evaluating the averaging operator $\bar{p}(\cdot)$ is not a simple operation in a decentralized computational setting which only allows for communication among neighbors. In order to overcome this issue, we will approximate the $\bar{p}(\cdot)$ operator using multi-consensus steps, and analyze the resulting iterations as an inexact primal-dual algorithm. In [20], this idea has been exploited within a distributed primal algorithm for unconstrained consensus optimization problems. We define a consensus step as one exchange of local variables among neighboring nodes; the details of this operation will be discussed shortly. Since the connectivity network is dynamic, let $\mathcal{G}^t = (\mathcal{N}, \mathcal{E}^t)$ be the connectivity network at the time the $t$-th consensus step is realized, for $t \in \mathbb{Z}_+$. We adopt the information exchange model in [21].
Assumption 3.1. Let $V^t \in \mathbb{R}^{|\mathcal{N}| \times |\mathcal{N}|}$ be the weight matrix corresponding to $\mathcal{G}^t = (\mathcal{N}, \mathcal{E}^t)$ at the time of the $t$-th consensus step and $\mathcal{N}_i^t \triangleq \{j \in \mathcal{N} : (i,j) \in \mathcal{E}^t \text{ or } (j,i) \in \mathcal{E}^t\}$. Suppose for all $t \in \mathbb{Z}_+$: (i) $V^t$ is doubly stochastic; (ii) there exists $\zeta \in (0,1)$ such that for $i \in \mathcal{N}$, $V_{ij}^t \geq \zeta$ if $j \in \mathcal{N}_i^t$, and $V_{ij}^t = 0$ if $j \notin \mathcal{N}_i^t$; (iii) $\mathcal{G}^\infty = (\mathcal{N}, \mathcal{E}^\infty)$ is connected, where $\mathcal{E}^\infty \triangleq \{(i,j) \in \mathcal{N} \times \mathcal{N} : (i,j) \in \mathcal{E}^t \text{ for infinitely many } t \in \mathbb{Z}_+\}$, and there exists $\mathbb{Z}_+ \ni \bar{T} > 1$ such that if $(i,j) \in \mathcal{E}^\infty$, then $(i,j) \in \mathcal{E}^t \cup \mathcal{E}^{t+1} \cup \dots \cup \mathcal{E}^{t+\bar{T}-1}$ for all $t \geq 1$.
Lemma 3.1. [21] Let Assumption 3.1 hold, and $W^{t,s} = V^t V^{t-1} \cdots V^{s+1}$ for $t \geq s+1$. Given $s \geq 0$, the entries of $W^{t,s}$ converge to $\frac{1}{N}$ as $t \to \infty$ at a geometric rate, i.e., for all $i,j \in \mathcal{N}$, one has $\big| W_{ij}^{t,s} - \frac{1}{N} \big| \leq \Gamma \alpha^{t-s}$, where $\Gamma \triangleq 2(1 + \zeta^{-\tilde{T}})/(1 - \zeta^{\tilde{T}})$, $\alpha \triangleq (1 - \zeta^{\tilde{T}})^{1/\tilde{T}}$, and $\tilde{T} \triangleq (N-1)\bar{T}$.
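As a quick illustration of Lemma 3.1 (ours; Metropolis weights are one concrete doubly stochastic choice satisfying Assumption 3.1, here on a fixed path graph):

```python
import numpy as np

def metropolis_weights(adj):
    """Doubly stochastic mixing matrix: V_ij = 1/(1 + max(d_i, d_j)) on edges."""
    N, deg = len(adj), adj.sum(axis=1)
    V = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            if adj[i, j]:
                V[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        V[i, i] = 1.0 - V[i].sum()
    return V

adj = np.array([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]])   # path graph on 4 nodes
V = metropolis_weights(adj)
W = np.linalg.matrix_power(V, 50)          # W^{t,s} after 50 consensus steps
print(np.abs(W - 0.25).max())              # geometric decay toward 1/N
```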
Consider the $k$-th iteration of PDA as shown in (12). Instead of computing $\mu^{k+1}$ exactly according to (13), we propose to approximate $\mu^{k+1}$ with the help of Lemma 3.1 and set $\lambda^{k+1}$ to this approximation. In particular, let $t_k$ be the total number of consensus steps done before the $k$-th iteration of PDA, and let $q_k \geq 1$ be the number of consensus steps within iteration $k$. For $\mathbf{x} = [x_i]_{i\in\mathcal{N}}$, define
$$\mathcal{R}^k(\mathbf{x}) \triangleq \mathcal{P}_{\mathcal{B}}\big( (W^{t_k+q_k, t_k} \otimes I_n)\, \mathbf{x} \big) \tag{16}$$
to approximate $\mathcal{P}_{\mathcal{C}}(\mathbf{x})$ in (13). Note that $\mathcal{R}^k(\cdot)$ can be computed in a distributed fashion requiring $q_k$ communications with the neighbors for each node. Indeed,
$$\mathcal{R}^k(\mathbf{x}) = [\mathcal{R}_i^k(\mathbf{x})]_{i\in\mathcal{N}} \quad \text{such that} \quad \mathcal{R}_i^k(\mathbf{x}) \triangleq \mathcal{P}_{B_0}\Big( \sum_{j\in\mathcal{N}} W_{ij}^{t_k+q_k, t_k}\, x_j \Big). \tag{17}$$
Moreover, the approximation error, $\mathcal{R}^k(\mathbf{x}) - \mathcal{P}_{\mathcal{C}}(\mathbf{x})$, for any $\mathbf{x}$ can be bounded as in (18) due to the non-expansivity of $\mathcal{P}_{\mathcal{B}}$ and using Lemma 3.1. From (15), we get for all $i \in \mathcal{N}$,
$$\big\| \mathcal{R}_i^k(\mathbf{x}) - \mathcal{P}_{B_0}\big( \bar{p}(\mathbf{x}) \big) \big\| = \Big\| \mathcal{P}_{B_0}\Big( \sum_{j\in\mathcal{N}} W_{ij}^{t_k+q_k,t_k} x_j \Big) - \mathcal{P}_{B_0}\Big( \frac{1}{N}\sum_{j\in\mathcal{N}} x_j \Big) \Big\| \leq \sum_{j\in\mathcal{N}} \Big| W_{ij}^{t_k+q_k,t_k} - \frac{1}{N} \Big| \, \|x_j\| \leq N \Gamma \alpha^{q_k} \|\mathbf{x}\|. \tag{18}$$
Thus, (15) implies that $\|\mathcal{R}^k(\mathbf{x}) - \mathcal{P}_{\mathcal{C}}(\mathbf{x})\| \leq N \Gamma \alpha^{q_k} \|\mathbf{x}\|$. Next, to obtain an inexact variant of (12), we replace the exact computation in (12c) with the inexact iteration rule:
$$\lambda^{k+1} \leftarrow \lambda^k + \gamma(2\mathbf{x}^{k+1} - \mathbf{x}^k) - \gamma\, \mathcal{R}^k\big( \tfrac{1}{\gamma}\lambda^k + 2\mathbf{x}^{k+1} - \mathbf{x}^k \big). \tag{19}$$
Thus, the PDA iterations given in (12) can be computed inexactly, but in a decentralized way for dynamic connectivity, via the node-specific computations in Algorithm DPDA-D displayed in Fig. 2 below.
Algorithm DPDA-D ( $\mathbf{x}^0, \theta^0, \gamma, \{\tau_i, \kappa_i\}_{i\in\mathcal{N}}, \{q_k\}_{k\geq 0}$ )
Initialization: $\lambda_i^0 \leftarrow 0$, $i \in \mathcal{N}$
Step $k$: ($k \geq 0$)
1. $x_i^{k+1} \leftarrow \mathbf{prox}_{\tau_i \rho_i}\big( x_i^k - \tau_i (\nabla f_i(x_i^k) + A_i^\top \theta_i^k + \lambda_i^k) \big)$, $\quad r_i \leftarrow \frac{1}{\gamma}\lambda_i^k + 2x_i^{k+1} - x_i^k$, $\quad i \in \mathcal{N}$
2. $\theta_i^{k+1} \leftarrow \mathcal{P}_{\mathcal{K}_i^\circ}\big( \theta_i^k + \kappa_i (A_i(2x_i^{k+1} - x_i^k) - b_i) \big)$, $\quad i \in \mathcal{N}$
3. For $\ell = 1, \dots, q_k$
4. $\quad r_i \leftarrow \sum_{j \in \mathcal{N}_i^{t_k+\ell} \cup \{i\}} V_{ij}^{t_k+\ell} r_j$, $\quad i \in \mathcal{N}$
5. End For
6. $\lambda_i^{k+1} \leftarrow \lambda_i^k + \gamma(2x_i^{k+1} - x_i^k) - \gamma\, \mathcal{P}_{B_0}(r_i)$, $\quad i \in \mathcal{N}$

Figure 2: Distributed Primal Dual Algorithm for Dynamic $\mathcal{G}^t$ (DPDA-D)
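Steps 3-5 of DPDA-D amount to $q_k$ rounds of neighbor averaging followed by the ball projection. A compact sketch (ours; `mix_mats` stands for the realized mixing matrices $V^{t_k+1}, \dots, V^{t_k+q_k}$, which in practice act implicitly through neighbor communication):

```python
import numpy as np

def approx_proj_consensus(r, mix_mats, B):
    """R^k applied to the stacked local vectors r of shape (N, n), as in (16)-(17)."""
    for V in mix_mats:                       # r_i <- sum_j V_ij r_j at all nodes in parallel
        r = V @ r
    nrm = np.linalg.norm(r, axis=1, keepdims=True)
    return r * np.minimum(1.0, B / np.maximum(nrm, 1e-12))   # rowwise P_{B_0}
```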
Next, we define the proximal error sequence $\{\mathbf{e}^k\}_{k\geq 1}$ as in (20), which will be used later for analyzing the convergence of Algorithm DPDA-D displayed in Fig. 2:
$$\mathbf{e}^{k+1} \triangleq \mathcal{P}_{\mathcal{C}}\big( \tfrac{1}{\gamma}\lambda^k + 2\mathbf{x}^{k+1} - \mathbf{x}^k \big) - \mathcal{R}^k\big( \tfrac{1}{\gamma}\lambda^k + 2\mathbf{x}^{k+1} - \mathbf{x}^k \big); \tag{20}$$
hence, $\lambda^k = \mu^k + \gamma \mathbf{e}^k$ for $k \geq 1$ when (12c) is replaced with (19). In the rest, we assume $\lambda^0 = 0$.
The following observation will also be useful to prove error bounds for the DPDA-D iterate sequence. For each $i \in \mathcal{N}$, the definition of $\mathcal{R}_i^k$ in (17) implies that $\mathcal{R}_i^k(\mathbf{x}) \in B_0$ for all $\mathbf{x}$; hence, from (19),
$$\|\lambda_i^{k+1}\| \leq \|\lambda_i^k + \gamma(2x_i^{k+1} - x_i^k)\| + \gamma \big\| \mathcal{R}_i^k\big( \tfrac{1}{\gamma}\lambda^k + 2\mathbf{x}^{k+1} - \mathbf{x}^k \big) \big\| \leq \|\lambda_i^k\| + 4\gamma B.$$
Thus, we trivially get the following bound on $\lambda^k$:
$$\|\lambda^k\| \leq 4\gamma \sqrt{N} B\, k. \tag{21}$$
Moreover, for any $\xi$ and $\bar{\xi}$ we have that
$$\sigma_{\mathcal{C}}(\xi) = \sup_{\mathbf{x}\in\mathcal{C}} \langle \bar{\xi}, \mathbf{x} \rangle + \langle \xi - \bar{\xi}, \mathbf{x} \rangle \leq \sigma_{\mathcal{C}}(\bar{\xi}) + \sqrt{N} B \, \|\xi - \bar{\xi}\|. \tag{22}$$
Theorem 3.2. Suppose Assumption 1.1 holds. Starting from $\mu^0 = 0$, $\theta^0 = 0$, and an arbitrary $\mathbf{x}^0$, let $\{\mathbf{x}^k, \theta^k, \lambda^k\}_{k\geq 0}$ be the iterate sequence generated using Algorithm DPDA-D, displayed in Fig. 2, using $q_k = \lceil k^{1/p} \rceil$ consensus steps at the $k$-th iteration for all $k \geq 1$, for some rational $p \geq 1$. Let the primal-dual step-sizes $\{\tau_i, \kappa_i\}_{i\in\mathcal{N}}$ and $\gamma$ be chosen such that the following holds:
$$\Big( \frac{1}{\tau_i} - L_i - \gamma \Big) \frac{1}{\kappa_i} > \sigma^2_{\max}(A_i), \quad \forall i \in \mathcal{N}. \tag{23}$$
Then $\{\mathbf{x}^k, \theta^k, \lambda^k\}_{k\geq 0}$ converges to $\{\mathbf{x}^*, \theta^*, \lambda^*\}$, a saddle point of (11) such that $\mathbf{x}^* = \mathbf{1} \otimes x^*$ and $(x^*, \theta^*)$ is a primal-dual optimal solution to (3). Moreover, the following bounds hold for all $K \geq 1$:
$$\|\mu^*\| \, d_{\mathcal{C}}(\bar{\mathbf{x}}^K) + \sum_{i\in\mathcal{N}} \|\theta_i^*\| \, d_{\mathcal{K}_i}(A_i \bar{x}_i^K - b_i) \leq \frac{\Theta_2 + \Theta_3(K)}{K}, \qquad |\Phi(\bar{\mathbf{x}}^K) - \Phi(\mathbf{x}^*)| \leq \frac{\Theta_2 + \Theta_3(K)}{K},$$
where $\bar{\mathbf{x}}^K \triangleq \frac{1}{K}\sum_{k=1}^K \mathbf{x}^k$, $\Theta_2 \triangleq \frac{2}{\gamma}\|\mu^*\| \big( \|\mu^*\| + \gamma\|\mathbf{x}^0\| \big) + \sum_{i\in\mathcal{N}} \big[ \frac{1}{2\tau_i}\|x_i^* - x_i^0\|^2 + \frac{4}{\kappa_i}\|\theta_i^*\|^2 \big]$, and $\Theta_3(K) \triangleq 8N^2 B^2 \gamma \sum_{k=1}^K \Gamma \alpha^{q_k} \big( 2\Gamma \alpha^{q_k} k^2 + \gamma + \Gamma N B\, k \big)$. Moreover, $\sup_{K \in \mathbb{Z}_+} \Theta_3(K) < \infty$; hence, $\frac{1}{K}\Theta_3(K) = \mathcal{O}(\frac{1}{K})$.
Remark 3.1. Note that the suboptimality, infeasibility and consensus violation at the $K$-th iteration are $\mathcal{O}(\Theta_3(K)/K)$, where $\Theta_3(K)$ denotes the error accumulation due to approximation errors, and $\Theta_3(K)$ can be bounded above for all $K \geq 1$ as $\Theta_3(K) \leq R \sum_{k=1}^K \alpha^{q_k} k^2$ for some constant $R > 0$. Since $\sum_{k=1}^\infty \alpha^{k^{1/p}} k^2 < \infty$ for any $p \geq 1$, if one chooses $q_k = \lceil k^{1/p} \rceil$ for $k \geq 1$, then the total number of communications per node until the end of the $K$-th iteration can be bounded above by $\sum_{k=1}^K q_k = \mathcal{O}(K^{1+1/p})$. For large $p$, $q_k$ grows slowly, which makes the method more practical at the cost of a longer convergence time due to an increase in the $\mathcal{O}(1)$ constant. Note that $q_k = \lceil (\log k)^2 \rceil$ also works and grows very slowly. We assume the agents know $q_k$ as a function of $k$ at the beginning; hence, synchronicity can be achieved by simply counting local communications with each neighbor.
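A few lines (ours) make the communication budget of this remark tangible for $p = 2$:

```python
import math

def total_communications(K, p=2):
    """Sum of q_k = ceil(k^(1/p)) over k = 1..K; grows like K^(1 + 1/p)."""
    return sum(math.ceil(k ** (1.0 / p)) for k in range(1, K + 1))

for K in (100, 1000, 10000):
    print(K, total_communications(K), round(K ** 1.5))   # same order for p = 2
```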
4 Numerical Section
We tested DPDA-S and DPDA-D on a primal linear SVM problem where the data is distributed among the computing nodes in $\mathcal{N}$. For the static case, the communication network $\mathcal{G} = (\mathcal{N}, \mathcal{E})$ is a connected graph that is generated by randomly adding edges to a spanning tree, itself generated uniformly at random, until a desired algebraic connectivity is achieved. For the dynamic case, for each consensus round $t \geq 1$, $\mathcal{G}^t$ is generated as in the static case, and $V^t \triangleq I - \frac{1}{c}\Omega^t$, where $\Omega^t$ is the Laplacian of $\mathcal{G}^t$, and the constant $c > d_{\max}^t$. We ran DPDA-S and DPDA-D on line and complete graphs as well to see the effect of topology; for the dynamic case, when the topology is a line, each $\mathcal{G}^t$ is a random line graph. Let $\mathcal{S} \triangleq \{1, 2, \dots, s\}$ and $\mathcal{D} \triangleq \{(\bar{x}_\ell, y_\ell) \in \mathbb{R}^n \times \{-1, +1\} : \ell \in \mathcal{S}\}$ be a set of feature vector and label pairs. Suppose $\mathcal{S}$ is partitioned into $\mathcal{S}_{\text{test}}$ and $\mathcal{S}_{\text{train}}$, i.e., the index sets for the test and training data; let $\{\mathcal{S}_i\}_{i\in\mathcal{N}}$ be a partition of $\mathcal{S}_{\text{train}}$ among the nodes $\mathcal{N}$. Let $w = [w_i]_{i\in\mathcal{N}}$, $b = [b_i]_{i\in\mathcal{N}}$, and $\xi \in \mathbb{R}^{|\mathcal{S}_{\text{train}}|}$ such that $w_i \in \mathbb{R}^n$ and $b_i \in \mathbb{R}$ for $i \in \mathcal{N}$.
Consider the following distributed SVM problem:
$$\min_{w,b,\xi} \Big\{ \sum_{i\in\mathcal{N}} \tfrac{1}{2}\|w_i\|^2 + C|\mathcal{N}| \sum_{i\in\mathcal{N}} \sum_{\ell\in\mathcal{S}_i} \xi_\ell \;:\; y_\ell(w_i^\top \bar{x}_\ell + b_i) \geq 1 - \xi_\ell,\ \xi_\ell \geq 0,\ \ell \in \mathcal{S}_i,\ i \in \mathcal{N};\ \ w_i = w_j,\ b_i = b_j\ \ \forall(i,j) \in \mathcal{E} \Big\}.$$
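To place this SVM in the conic template of (3), each node can stack its local variable as $z_i = (w_i, b_i, \xi_i)$ and encode the hinge constraints as $A_i z_i - b_i \in \mathcal{K}_i = \mathbb{R}^{2|\mathcal{S}_i|}_+$. A sketch of this reformulation (ours; the paper does not spell the matrices out):

```python
import numpy as np

def svm_conic_data(X_i, y_i):
    """Local conic data for node i: rows encode y_l (w^T x_l + b) - 1 + xi_l >= 0
    and xi_l >= 0 over z_i = (w_i, b_i, xi_i)."""
    m, n = X_i.shape
    A = np.zeros((2 * m, n + 1 + m))
    A[:m, :n] = y_i[:, None] * X_i        # y_l x_l^T  acting on w
    A[:m, n] = y_i                        # y_l        acting on b
    A[:m, n + 1:] = np.eye(m)             # + xi_l
    A[m:, n + 1:] = np.eye(m)             # xi_l >= 0
    b = np.concatenate([np.ones(m), np.zeros(m)])
    return A, b

A_i, b_i = svm_conic_data(np.array([[1., 2.], [-1., .5]]), np.array([1., -1.]))
print(A_i.shape)   # (4, 5): four cone constraints over (w, b, xi) in R^5
```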
Similar to [3], $\{\bar{x}_\ell\}_{\ell\in\mathcal{S}}$ is generated from a two-dimensional multivariate Gaussian distribution with covariance matrix $\Sigma = [1, 0; 0, 2]$ and with mean vector either $m_1 = [-1, -1]^\top$ or $m_2 = [1, 1]^\top$ with equal probability. The experiment was performed for $C = 2$, $|\mathcal{N}| = 10$, $s = 900$ such that $|\mathcal{S}_{\text{test}}| = 600$, $|\mathcal{S}_i| = 30$ for $i \in \mathcal{N}$, i.e., $|\mathcal{S}_{\text{train}}| = 300$, and $q_k = \lceil \sqrt{k} \rceil$. We ran DPDA-S and DPDA-D on line, random, and complete graphs, where the random graph is generated such that the algebraic connectivity is approximately 4. The relative suboptimality and relative consensus violation, i.e., $\max_{(i,j)\in\mathcal{E}} \|[w_i^\top\ b_i]^\top - [w_j^\top\ b_j]^\top\| / \|[w^{*\top}\ b^*]\|$, and the absolute feasibility violation are plotted against the iteration counter in Fig. 3, where $[w^{*\top}\ b^*]$ denotes the optimal solution to the central problem. As expected, the convergence is slower when the connectivity of the graph is weaker.
Furthermore, a visual comparison between DPDA-S, local SVMs (for two nodes) and the centralized SVM for the same training and test data sets is given in Fig. 4 and Fig. 5 in the appendix.

Figure 3: Static (top) and Dynamic (bottom) network topologies: line, random, and complete graphs
References
[1] Qing Ling and Zhi Tian. Decentralized sparse signal recovery for compressive sleeping wireless sensor networks. Signal Processing, IEEE Transactions on, 58(7):3816-3827, 2010.
[2] Ioannis D. Schizas, Alejandro Ribeiro, and Georgios B. Giannakis. Consensus in ad hoc WSNs with noisy links - Part I: Distributed estimation of deterministic signals. Signal Processing, IEEE Transactions on, 56(1):350-364, 2008.
[3] Pedro A. Forero, Alfonso Cano, and Georgios B. Giannakis. Consensus-based distributed support vector machines. The Journal of Machine Learning Research, 11:1663-1707, 2010.
[4] Ryan McDonald, Keith Hall, and Gideon Mann. Distributed training strategies for the structured perceptron. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 456-464. Association for Computational Linguistics, 2010.
[5] F. Yan, S. Sundaram, S. Vishwanathan, and Y. Qi. Distributed autonomous online learning: Regrets and intrinsic privacy-preserving properties. Knowledge and Data Engineering, IEEE Transactions on, 25(11):2483-2493, 2013.
[6] Gonzalo Mateos, Juan Andrés Bazerque, and Georgios B. Giannakis. Distributed sparse linear regression. Signal Processing, IEEE Transactions on, 58(10):5262-5276, 2010.
[7] Francis R. Bach, Gert R. G. Lanckriet, and Michael I. Jordan. Multiple kernel learning, conic duality, and the SMO algorithm. In Proceedings of the Twenty-First International Conference on Machine Learning, page 6. ACM, 2004.
[8] Angelia Nedić and Asuman Ozdaglar. Subgradient methods for saddle-point problems. Journal of Optimization Theory and Applications, 142(1):205-228, 2009.
[9] Antonin Chambolle and Thomas Pock. A first-order primal-dual algorithm for convex problems with applications to imaging. Journal of Mathematical Imaging and Vision, 40(1):120-145, 2011.
[10] Bingsheng He and Xiaoming Yuan. Convergence analysis of primal-dual algorithms for a saddle-point problem: from contraction perspective. SIAM Journal on Imaging Sciences, 5(1):119-149, 2012.
[11] Antonin Chambolle and Thomas Pock. On the ergodic convergence rates of a first-order primal-dual algorithm. Mathematical Programming, 159(1):253-287, 2016.
[12] Yunmei Chen, Guanghui Lan, and Yuyuan Ouyang. Optimal primal-dual methods for a class of saddle point problems. SIAM Journal on Optimization, 24(4):1779-1814, 2014.
[13] A. Nedić and A. Ozdaglar. Convex Optimization in Signal Processing and Communications, chapter Cooperative Distributed Multi-agent Optimization, pages 340-385. Cambridge University Press, 2010.
[14] A. Nedić. Distributed optimization. In Encyclopedia of Systems and Control, pages 1-12. Springer, 2014.
[15] Tsung-Hui Chang, Angelia Nedić, and Anna Scaglione. Distributed constrained optimization by consensus-based primal-dual perturbation method. Automatic Control, IEEE Transactions on, 59(6):1524-1538, 2014.
[16] David Mateos-Núñez and Jorge Cortés. Distributed subgradient methods for saddle-point problems. In 2015 54th IEEE Conference on Decision and Control (CDC), pages 5462-5467, Dec 2015.
[17] Deming Yuan, Shengyuan Xu, and Huanyu Zhao. Distributed primal-dual subgradient method for multiagent optimization via consensus algorithms. Systems, Man, and Cybernetics, Part B: Cybernetics, IEEE Transactions on, 41(6):1715-1724, 2011.
[18] Angelia Nedić, Asuman Ozdaglar, and Pablo A. Parrilo. Constrained consensus and optimization in multiagent networks. Automatic Control, IEEE Transactions on, 55(4):922-938, 2010.
[19] Kunal Srivastava, Angelia Nedić, and Dušan M. Stipanović. Distributed constrained optimization over noisy networks. In Decision and Control (CDC), 2010 49th IEEE Conference on, pages 1945-1950. IEEE, 2010.
[20] Albert I. Chen and Asuman Ozdaglar. A fast distributed proximal-gradient method. In Communication, Control, and Computing (Allerton), 2012 50th Annual Allerton Conference on, pages 601-608. IEEE, 2012.
[21] Angelia Nedić and Asuman Ozdaglar. Distributed subgradient methods for multi-agent optimization. Automatic Control, IEEE Transactions on, 54(1):48-61, 2009.
[22] Ralph Tyrell Rockafellar. Convex Analysis. Princeton University Press, 2015.
[23] H. Robbins and D. Siegmund. Optimizing Methods in Statistics (Proc. Sympos., Ohio State Univ., Columbus, Ohio, 1971), chapter A convergence theorem for non negative almost supermartingales and some applications, pages 233-257. New York: Academic Press, 1971.
5,795 | 6,243 | Infinite Hidden Semi-Markov Modulated Interaction
Point Process
Peng Lin†‡, Bang Zhang†, Ting Guo†, Yang Wang†, Fang Chen†
† Data61 CSIRO, Australian Technology Park, 13 Garden Street, Eveleigh NSW 2015, Australia
‡ School of Computer Science and Engineering, The University of New South Wales, Australia
{peng.lin, bang.zhang, ting.guo, yang.wang, fang.chen}@data61.csiro.au
Abstract
The correlation between events is ubiquitous and important for temporal events modelling. In many cases, the correlation exists between not only events' emitted observations, but also their arrival times. State space models (e.g., hidden Markov model) and stochastic interaction point process models (e.g., Hawkes process) have been studied extensively yet separately for the two types of correlations in the past. In this paper, we propose a Bayesian nonparametric approach that considers both types of correlations via unifying and generalizing the hidden semi-Markov model and interaction point process model. The proposed approach can simultaneously model both the observations and arrival times of temporal events, and automatically determine the number of latent states from data. A Metropolis-within-particle-Gibbs sampler with ancestor resampling is developed for efficient posterior inference. The approach is tested on both synthetic and real-world data with promising outcomes.
1 Introduction
Temporal events modeling is a classic machine learning problem that has drawn enormous research attention for decades. It has wide applications in many areas, such as financial modelling, social events analysis, and seismological and epidemiological forecasting. An event is often associated with an arrival time and an observation, e.g., a scalar or vector. For example, a trading event in a financial market has a trading time and a trading price. A message in a social network has a posting time and a sequence of words. A main task of temporal events modelling is to capture the underlying events correlation and use it to make predictions for future events' observations and/or arrival times.
The correlation between events' observations can be readily found in many real-world cases in which an event's observation is influenced by its predecessors' observations. For example, the price of a trading event is impacted by former trading prices. The content of a new social message is affected
by the contents of the previous messages. State space model (SSM), e.g., the hidden Markov model
(HMM) [16], is one of the most prevalent frameworks that consider such correlation. It models the
correlation via latent state dependency. Each event in the HMM is associated with a latent state
that can emit an observation. A latent state is independent of all but the most recent state, i.e.,
Markovianity. Hence, a future event observation can be predicted based on the observed events and
inferred mechanisms of emission and transition.
Despite its popularity, the HMM lacks the flexibility to model event arrival times. It only allows fixed inter-arrival times. The duration of a type of state follows a geometric distribution with its self-transition probability as the parameter, due to the strict Markovian constraint. The hidden semi-Markov model (HSMM) [14, 21] was developed to allow non-geometric state durations. It is an extension of the HMM by allowing the underlying state transition process to be a semi-Markov chain
with a variable duration time for each state. In addition to the HMM components, the HSMM models
the duration of a state as a random variable and a state can emit a sequence of observations.
The HSMM allows the flexibility of variable inter-arrival times, but it does not consider events' correlation in arrival times. In many real-world applications, one event can trigger the occurrences of
others in the near future. For instance, earthquakes and epidemics are diffusible events, i.e., one can
cause the occurrences of others. Trading events in financial markets arrive in clusters. Information
propagation in social network shows contagious and clustering characteristics. All these events
exhibit interaction characteristics in terms of arrival times. The likelihood of an event's arrival time is affected by the previous events' arrival times. The stochastic interaction point process (IPP), e.g., the Hawkes process [6], is a widely adopted framework for capturing such arrival time correlation. It models the correlation via a conditional intensity function that depicts the event intensity depending on all the previous events' arrival times. However, unlike the SSMs, it lacks the capability of modelling events' latent states and their interactions.
It is clearly desirable in real-world applications to have both arrival time correlation and observation
correlation considered in a unified manner so that we can estimate both when and how events will
appear. Inspired by the merits of SSMs and IPPs, we propose a novel Bayesian nonparametric
approach that unifies and generalizes SSMs and IPPs via a latent semi-Markov state chain with a countably infinite number of states. The latent states govern both the observation emission and the new-event triggering mechanism. An efficient sampling method is developed within the framework of
particle Markov chain Monte Carlo (PMCMC) [1] for the posterior inference of the proposed model.
We first review closely related techniques in Section 2, and give the description of the proposed model
in Section 3. Then Section 4 presents the inference algorithm. In Section 5, we show the results of
the empirical studies on both synthetic and real-world data. Conclusions are drawn in Section 6.
2 Preliminaries
In this section, we review the techniques that are closely related to the proposed method, namely
hidden (semi-)Markov model, its Bayesian nonparametric extension and Hawkes process.
2.1 Hidden (Semi-)Markov Model
The HMM [16] is one of the most popular approaches for temporal event modelling. It utilizes a
sequence of latent states with Markovian property to model the dynamics of temporal events. Each
event in the HMM is associated with a latent state that determines the event's observation via an emission probability distribution. The state of an event is independent of all but its most recent predecessor's state (i.e., Markovianity), following a transition probability distribution. The HMM consists of 4 components: (1) an initial state probability distribution, (2) a finite latent state space,
(3) a state transition matrix, and (4) an emission probability distribution. As a result, the inference
for the HMM involves: inferring (1) the initial state probability distribution, (2) the sequence of the
latent states, (3) the state transition matrix and (4) the emission probability distribution.
The HMM has proven to be an excellent general framework for modelling sequential data, but it has two significant drawbacks: (1) The durations of events (or the inter-arrival times between events) are fixed to a common value; the state duration distributions are restricted to a geometric form. Such a setting lacks the flexibility needed for real-world applications. (2) The size of the latent state space in the HMM must be set a priori instead of being learned from data.
The hidden semi-Markov model (HSMM) [14, 21] is a popular extension to the HMM, which tries to
mitigate the first drawback of the HMM. It allows latent states to have variable durations, thereby
forming a semi-Markov chain. It reduces to the HMM when durations follow a geometric distribution.
In addition to the 4 components of the HMM, the HSMM has a state duration probability distribution. As a result, the inference procedure for the HSMM also involves the inference of the duration probability
distribution. It is worth noting that the interaction between events in terms of event arrival time is
neglected by both the HMM and the HSMM.
2.2 Hierarchical Dirichlet Process Prior for State Transition
The recent development in Bayesian nonparametrics helps address the second drawback of the HMM.
Here, we briefly review the Hierarchical Dirichlet Process HMM (HDP-HMM). Let $(\Theta, \mathcal{B})$ be a measurable space and $G_0$ be a probability measure on it. A Dirichlet process (DP) $G$ is a distribution of a random probability measure over the measurable space $(\Theta, \mathcal{B})$. For any finite measurable partition $(A_1, \dots, A_r)$ of $\Theta$, the random vector $(G(A_1), \dots, G(A_r))$ follows a finite Dirichlet distribution parameterized by $(\alpha_0 G_0(A_1), \dots, \alpha_0 G_0(A_r))$, where $\alpha_0$ is a positive real number.
The HDP is defined based on the DP for modelling grouped data. It is a distribution over a collection of random probability measures over the measurable space $(\Theta, \mathcal{B})$. Each of these random probability measures $G_k$ is associated with a group. A global random probability measure $G_0$, itself distributed as a DP with concentration parameter $\gamma$ and base probability measure $H$, is used as the mean measure. Because the HMM can be treated as a set of mixture models in a dynamic manner, each of which corresponds to a value of the current state, the HDP becomes a natural choice as the prior over the state transitions [2, 18]. The generative HDP-HMM model can be summarized as:
$$\beta \,|\, \gamma \sim \text{GEM}(\gamma), \qquad \pi_k \,|\, \alpha_0, \beta \sim \text{DP}(\alpha_0, \beta), \qquad \theta_k \,|\, \lambda, H \sim H(\lambda),$$
$$s_n \,|\, s_{n-1}, (\pi_k)_{k=1}^\infty \sim \pi_{s_{n-1}}, \qquad y_n \,|\, s_n, (\theta_k)_{k=1}^\infty \sim F(\theta_{s_n}). \tag{1}$$
GEM denotes the stick-breaking process. The variable sequence $s_n$ indicates the latent state sequence, and $y_n$ represents the observation. The HDP plays the role of a prior over the infinite transition matrices. Each $\pi_k$ is a draw from a DP; it depicts the transition distribution from state $k$. The probability measures from which the $\pi_k$'s are drawn are parameterized by the same discrete base measure $\beta$. $\theta_k$ parameterizes the emission distribution $F$. Usually $H$ is set to be conjugate to $F$, simplifying inference. $\gamma$ controls the base measure $\beta$'s degree of concentration. $\alpha_0$ plays the role of governing the variability of the prior mean measure across the rows of the transition matrix.

Because the HDP prior doesn't distinguish self-transitions from transitions to other states, it is vulnerable to unnecessarily frequent switching of states and to creating more states. Thus, [5] proposed a sticky HDP-HMM that includes a self-transition bias parameter in the state transition measure, $\pi_k \sim \text{DP}\big(\alpha_0 + \kappa, (\alpha_0 \beta + \kappa \delta_k)/(\alpha_0 + \kappa)\big)$, where $\kappa$ controls the stickiness of the transition matrix.
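A short sketch of the truncated stick-breaking construction (ours; truncation at $K$ atoms drops the residual mass, so the drawn $\beta$ sums to slightly less than one):

```python
import numpy as np

def gem(gamma, K, rng):
    """beta ~ GEM(gamma), truncated: beta'_k ~ Beta(1, gamma),
    beta_k = beta'_k * prod_{l<k} (1 - beta'_l)."""
    bp = rng.beta(1.0, gamma, size=K)
    return bp * np.concatenate(([1.0], np.cumprod(1.0 - bp)[:-1]))

rng = np.random.default_rng(0)
beta = gem(gamma=2.0, K=25, rng=rng)
pi_k = rng.dirichlet(0.5 * beta)   # a truncated transition row pi_k ~ DP(alpha_0, beta)
print(beta.sum(), pi_k.sum())      # beta.sum() < 1 due to truncation; pi_k sums to 1
```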
2.3 Hawkes Process
Stochastic point processes [3] are a rich family of models designed for tackling a variety of temporal event modeling problems. A stochastic point process can be defined via its conditional intensity function, which provides an equivalent representation of a counting process for temporal events. Given $N(t)$ denoting the number of events occurred in the time interval $[0, t)$ and $\mathcal{H}_t$ indicating the arrival times of the temporal events before $t$, the intensity at a time point $t$ conditioned on the arrival times of all the previous events is defined as:
$$\lambda(t \,|\, \mathcal{H}_t) = \lim_{\Delta t \to 0} \frac{\mathbb{E}[N(t + \Delta t) - N(t) \,|\, \mathcal{H}_t]}{\Delta t}. \tag{2}$$
It is worth noting that we do not consider edge effects in this paper; hence no events exist before time 0. A variety of point processes has been developed with distinct functional forms of intensity for various modeling purposes. Interaction point processes (IPPs) [4] consider point interactions with an intensity function dependent on historical events. The Hawkes process [7, 6] is one of the most popular and flexible IPPs. Its conditional intensity has the following functional form:
$$\lambda(t) = \mu(t) + \sum_{t_n < t} \phi_n(t - t_n). \tag{3}$$
We use $\lambda(t)$ to represent the intensity function conditioned on the previous points $\mathcal{H}_t$ for notational simplicity. The function $\mu(t)$ is a non-negative background intensity function, which is often set to a positive real number. The function $\phi_n(\Delta t)$ represents the triggering kernel of event $t_n$. It is a decay function defined on $[0, \infty)$ depicting the decayed influence on triggering new events. A typical decay function is of exponential form, i.e., $\lambda(t) = \mu + \sum_{t_n < t} \alpha_0 \cdot \exp(-\beta_0 (t - t_n))$. As discussed in [7, 10], because the superposition of several Poisson processes is also a Poisson process, the Hawkes process can be considered a conditional Poisson process constituted by combining a background Poisson process $\mu(t)$ and a set of triggered Poisson processes with intensities $\phi_n(t - t_n)$.
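A minimal evaluation of the exponential-kernel intensity (our sketch; the parameter values are placeholders rather than estimates from the paper):

```python
import numpy as np

def hawkes_intensity(t, event_times, mu=0.6, alpha0=0.5, beta0=0.9):
    """lambda(t) = mu + sum_{t_n < t} alpha0 * exp(-beta0 (t - t_n))."""
    past = event_times[event_times < t]
    return mu + alpha0 * np.exp(-beta0 * (t - past)).sum()

events = np.array([0.4, 1.1, 1.3])
print([round(hawkes_intensity(t, events), 3) for t in (0.5, 1.2, 3.0)])
```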
Figure 1: (1) An intuitive illustration of the iHSMM-IPP model. Every event in the iHSMM-IPP
model is associated with a latent state s, an arrival time t and an observable value y. The colours of
points indicate latent states. Blue curve shows the event intensity. The top part of the figure illustrates
the IPP component of the iHSMM-IPP model and the bottom part illustrates the HSMM component.
The two components are integrated together via an infinite countable semi-Markov latent state chain.
(2) Graphical model of the iHSMM-IPP model. The top part shows the HDP-HMM.
3 Infinite Hidden Semi-Markov Modulated Interaction Point Process (iHSMM-IPP)
Inspired by the merits of SSMs and IPPs, we propose an infinite hidden semi-Markov modulated
interaction point process model (iHSMM-IPP). It is a Bayesian nonparametric stochastic point
process with a latent semi-Markov state chain determining both event emission probabilities and
event triggering kernels. An intuitive illustration is given in Fig. 1 (1). Each temporal event in the
iHSMM-IPP is represented by a stochastic point and each point is associated with a hidden discrete
state {si } that plays the role of determining event emission and triggering mechanism. As in SSMs
and IPPs, the event emission probabilities guide the generation of event observations {yi } and the
event triggering kernels influence the occurrence times {ti } of events. The hidden state depends only
on the most recent event's state. The size of the latent state space is countably infinite under the HDP prior.
The model can be formally defined as the following and its corresponding graphical model is given in
Fig. 1 (2).
$$\beta \,|\, \gamma \sim \text{GEM}(\gamma), \qquad \pi_k \,|\, \alpha_0, \beta \sim \text{DP}(\alpha_0, \beta), \qquad \eta_k \,|\, \zeta, H' \sim H'(\zeta), \qquad \theta_k \,|\, \lambda, H \sim H(\lambda),$$
$$s_n \,|\, s_{n-1}, (\pi_k)_{k=1}^\infty \sim \pi_{s_{n-1}}, \qquad y_n \,|\, s_n, (\theta_k)_{k=1}^\infty \sim F(\theta_{s_n}),$$
$$t_n \,|\, \cdot \sim \text{PP}\Big( \mu + \sum_{i=1}^{n-1} \phi_{\eta_{s_i}}(t - t_i) \Big). \tag{4}$$
We use $\phi_{\eta_{s_i}}(\cdot)$ to denote the triggering kernel parameterized by $\eta_{s_i}$, which is indexed by the latent state $s_i$. We use $\phi_{s_i}(\cdot)$ instead of $\phi_{\eta_{s_i}}(\cdot)$ for the remainder of the paper for the sake of notational simplicity. The iHSMM-IPP is a generative model that can be used for generating a series of events with arrival times and emitted observations. The arrival time $t_n$ is drawn from a Poisson process. We do not consider edge effects in this work. Therefore, the first event's arrival time, $t_1$, is drawn from a homogeneous Poisson process parameterized by a hyper-parameter $\mu$. For $n > 1$, $t_n$ is drawn from an inhomogeneous Poisson process whose conditional intensity function is defined as $\mu + \sum_{i=1}^{n-1} \phi_{s_i}(t - t_i)$. As defined before, $\phi_{s_i}(\cdot)$ indicates the triggering kernel of a former point $i$ whose latent state is $s_i$. The state of the point, $s_n$, is drawn following the guidance of the HDP prior as in the HDP-HMM. The emitted observation $y_n$ is generated from the emission probability distribution $F(\cdot)$ parameterized by $\theta_{s_n}$, which is determined by the state $s_n$.
4 Posterior Inference for the iHSMM-IPP
In this section, we describe the inference method for the proposed iHSMM-IPP model. Despite its flexibility, the proposed iHSMM-IPP model poses three challenges for efficient posterior inference: (1) the strongly correlated nature of its temporal dynamics, (2) the non-Markovianity introduced by the event triggering mechanism, and (3) the infinite dimensional state transition. Traditional sampling methods for high dimensional probability distributions, e.g., MCMC and sequential Monte Carlo (SMC), are unreliable when highly correlated variables are updated independently, which can be the case for the iHSMM-IPP model. We therefore develop the inference algorithm within the framework of particle MCMC (PMCMC), a family of inferential methods recently developed in [1]. The key idea of PMCMC is to use SMC to construct a proposal kernel for an MCMC sampler. It not only improves over traditional MCMC methods but also makes Bayesian inference feasible for a large class of statistical models. For tackling the non-Markovianity, the ancestor resampling scheme [13] is incorporated into our inference algorithm. Like existing forward-backward sampling methods, ancestor resampling uses backward sampling to improve the mixing of PMCMC. However, it achieves the same effect in a single forward sweep instead of using separate forward and backward sweeps. More importantly, it provides an effective way of sampling for non-Markovian SSMs.
Given a sequence of $N$ events, $\{y_n, t_n\}_{n=1}^N$, the inference algorithm needs to sample the hidden state sequence $\{s_n\}_{n=1}^N$, the emission distribution parameters $\theta_{1:K}$, the background event intensity $\mu$, the triggering kernel parameters $\eta_{1:K}$ (we omit $\eta$ and use $\phi_{1:K}$ instead of $\phi_{\eta_{1:K}}$ for notational simplicity, as before), the transition matrix $\pi_{1:K}$, and the HDP parameters $(\alpha_0, \beta, \gamma, \lambda)$. We use $K$ to represent the number of active states and $\Psi$ to indicate the set of variables excluding the latent state sequence, i.e., $\Psi = \{\alpha_0, \beta, \gamma, \lambda, \mu, \theta_{1:K}, \eta_{1:K}, \pi_{1:K}\}$. Only the major variables are listed, and $\Psi$ may also include other variables, such as the probability of the initial latent state. At a high level, all the variables are updated iteratively using a particle Gibbs (PG) sampler. A conditional SMC is performed as a proposal kernel for updating the latent state sequence in each PG iteration. An ancestor resampling scheme is adopted in the conditional SMC for handling the non-Markovianity caused by the triggering mechanism. Metropolis sampling is used in each PG iteration to update the background event intensity $\mu$ and the triggering kernel parameters $\eta_{1:K}$. The remaining variables in $\Psi$ can be readily sampled by following the scheme in [5, 18]. The proposal distribution $q_\Psi(\cdot)$ in the conditional SMC can be set by following [19]. The PG sampler is given in the following:
Step 1: Initialization: $i = 0$; set $\Psi(0)$, $s_{1:N}(0)$, $B_{1:N}(0)$.
Step 2: For iteration $i \geq 1$:
(a) Sample $\Psi(i) \sim p\{\Psi \,|\, y_{1:N}, t_{1:N}, s_{1:N}(i-1)\}$.
(b) Run a conditional SMC algorithm targeting $p_{\Psi(i)}(s_{1:N} \,|\, y_{1:N}, t_{1:N})$, conditional on $s_{1:N}(i-1)$ and $B_{1:N}(i-1)$.
(c) Sample $s_{1:N}(i) \sim \hat{p}_{\Psi(i)}(\cdot \,|\, y_{1:N}, t_{1:N})$.
We use $B_{1:N}$ to represent the ancestral lineage of the prespecified state path $s_{1:N}$ and $\hat{p}_{\Psi(i)}(\cdot \,|\, y_{1:N})$ to represent the particle approximation of $p_{\Psi(i)}(\cdot \,|\, y_{1:N})$. The details of the conditional SMC algorithm are given in the following. It is worth noting that the conditioned latent state path is only updated via the ancestor resampling.
Step 1: Let $s_{1:N}^{B_{1:N}} = \{s_1^{B_1}, s_2^{B_2}, \dots, s_N^{B_N}\}$ denote the path that is associated with the ancestral lineage $B_{1:N}$.
Step 2: For $n = 1$,
(a) For $j \neq B_1$, sample $s_1^j \sim q_\Psi(\cdot \,|\, y_1)$, $j \in [1, \dots, J]$ ($J$ denotes the number of particles).
(b) Compute the weights $w_1(s_1^j) = p(s_1^j) F(y_1 \,|\, s_1^j) / q_\Psi(s_1^j \,|\, y_1)$ and normalize them, $W_1^j = w_1(s_1^j) / \sum_{m=1}^J w_1(s_1^m)$. (We use $p(s_1)$ to represent the probability of the initial latent state and $q_\Psi(s_1 \,|\, y_1)$ to represent the proposal distribution conditional on the variable set $\Psi$.)
Step 3: For $n = 2, \dots, N$:
(a) For $j \neq B_n$, sample the ancestor index of particle $j$: $a_{n-1}^j \sim \text{Cat}(\cdot \,|\, W_{n-1}^{1:J})$.
(b) For $j \neq B_n$, sample $s_n^j \sim q_\Psi(\cdot \,|\, y_n, s_{n-1}^{a_{n-1}^j})$. If $s_n^j = K+1$, then create a new state using the stick-breaking construction for the HDP:
(i) Sample a new transition probability $\pi_{K+1} \sim \text{Dir}(\alpha_0 \beta)$.
(ii) Use the stick-breaking construction to expand $\beta \leftarrow [\beta, \beta_{K+1}]$:
$$\beta'_{K+1} \sim \text{Beta}(1, \gamma), \qquad \beta_{K+1} = \beta'_{K+1} \prod_{l=1}^K (1 - \beta'_l).$$
(iii) Expand the transition probability vectors $\pi_k$ to include transitions to state $K+1$ via the HDP stick-breaking construction: $\pi_k \leftarrow [\pi_{k,1}, \dots, \pi_{k,K+1}]$, $\forall k \in [1, K+1]$, where
$$\pi'_{k,K+1} \sim \text{Beta}\Big( \alpha_0 \beta_{K+1},\ \alpha_0 \Big( 1 - \sum_{l=1}^{K+1} \beta_l \Big) \Big), \qquad \pi_{k,K+1} = \pi'_{k,K+1} \prod_{l=1}^K (1 - \pi'_{k,l}).$$
(iv) Sample parameters for a new emission probability and triggering kernel: $\theta_{K+1} \sim H$ and $\eta_{K+1} \sim H'$.
(d) Perform ancestor resampling for the conditioned state path. Compute the ancestor weights $\tilde{w}_{n-1|N}^{p,j}$ via Eq. 7 and Eq. 8 and resample $a_n^{B_n}$ as $p(a_n^{B_n} = j) \propto \tilde{w}_{n-1|N}^{p,j}$.
(e) Compute and normalize the particle weights:
$$w_n(s_n^j) = \pi(s_n^j \,|\, s_{n-1}^{a_{n-1}^j})\, F(y_n \,|\, s_n^j) / q_\Psi(s_n^j \,|\, s_{n-1}^{a_{n-1}^j}, y_n), \qquad W_n(s_n^j) = w_n(s_n^j) \Big/ \sum_{j=1}^J w_n(s_n^j).$$
4.1 Metropolis Sampling for Background Intensity and Triggering Kernel
For the inference of the background intensity $\mu$ and the parameters of the triggering kernels $\eta_k$ in step 2(a) of the PG sampler, Metropolis sampling is used. As described in [3], the conditional likelihood of the occurrences of a sequence of events in an IPP can be expressed as:
$$\mathcal{L} \triangleq p(t_{1:N} \,|\, \mu, \phi_{1:K}) = \Big( \prod_{n=1}^N \lambda(t_n) \Big) \exp\Big( -\int_0^T \lambda(t)\, dt \Big). \tag{5}$$
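For exponential kernels, the integral in (5) has the closed form $\int_0^T \lambda(t)\,dt = \mu T + \sum_n \frac{\alpha_0}{\beta_0}\big(1 - e^{-\beta_0 (T - t_n)}\big)$, which yields a direct log-likelihood routine (our sketch, written for a single shared kernel rather than the state-indexed kernels of the full model):

```python
import numpy as np

def hawkes_loglik(times, T, mu, alpha0, beta0):
    """log of (5): sum_n log lambda(t_n) minus the integrated intensity on [0, T]."""
    ll = 0.0
    for n, t in enumerate(times):
        ll += np.log(mu + alpha0 * np.exp(-beta0 * (t - times[:n])).sum())
    ll -= mu * T + (alpha0 / beta0) * (1.0 - np.exp(-beta0 * (T - times))).sum()
    return ll

print(hawkes_loglik(np.array([0.5, 1.0, 2.5]), T=4.0, mu=0.6, alpha0=0.5, beta0=0.9))
```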
We describe the Metropolis update for $\eta_k$; a similar update can be derived for $\mu$. The normal distribution with the current value of $\eta_k$ as mean is used as the proposal distribution. The proposed candidate $\eta_k^*$ will be accepted with the probability $A(\eta_k^*, \eta_k) = \min\big\{ 1, \frac{\tilde{p}(\eta_k^*)}{\tilde{p}(\eta_k)} \big\}$. The ratio can be computed as:
$$\frac{\tilde{p}(\eta_k^*)}{\tilde{p}(\eta_k)} = \frac{p(\eta_k^*)}{p(\eta_k)} \cdot \frac{p(t_{1:N} \,|\, \eta_k^*, \text{rest})}{p(t_{1:N} \,|\, \eta_k, \text{rest})} = \frac{p(\eta_k^*)}{p(\eta_k)} \cdot \prod_{n=1}^N \frac{\mu(t_n) + \sum_{u<n} \phi^*_{s_u}(t_n - t_u)}{\mu(t_n) + \sum_{u<n} \phi_{s_u}(t_n - t_u)} \cdot \exp\Big( -\sum_{u\in[1,N]} \big( \Phi^*_{s_u}(T - t_u) - \Phi_{s_u}(T - t_u) \big) \Big). \tag{6}$$
We use $\Phi(\cdot)$ to represent the cumulative distribution function of the kernel function $\phi(\cdot)$. We use $\phi^*_{s_u}(\cdot)$ to represent the $u$-th event's triggering kernel candidate if $s_u = k$; it remains the current triggering kernel otherwise. $[0, T]$ indicates the time period of the $N$ events.
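A generic random-walk Metropolis step matching this acceptance rule (our sketch; `loglik` stands for the conditional likelihood $p(t_{1:N} \mid \eta_k, \text{rest})$ in log form, and the exponential log-prior is just one admissible choice):

```python
import numpy as np

def metropolis_step(eta, loglik, log_prior, step, rng):
    """Propose eta* ~ Normal(eta, step^2 I); accept with prob min(1, ratio) as in (6)."""
    eta_star = eta + step * rng.normal(size=eta.shape)        # symmetric proposal
    log_ratio = (log_prior(eta_star) + loglik(eta_star)
                 - log_prior(eta) - loglik(eta))
    if np.log(rng.uniform()) < log_ratio:
        return eta_star, True
    return eta, False

rng = np.random.default_rng(0)
log_prior = lambda e: -np.inf if (e <= 0).any() else -e.sum()  # exponential prior on (0, inf)
eta, accepted = metropolis_step(np.array([0.5, 0.9]),
                                loglik=lambda e: 0.0,          # plug in the real model here
                                log_prior=log_prior, step=0.1, rng=rng)
```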
4.2 Truncated Ancestor Resampling for Non-Markovianity
Truncated ancestor resampling [13] is used for tackling the non-Markovianity caused by the triggering mechanism of the proposed model. The ancestor weight can be computed as:
$$\tilde{w}^{p,j}_{n|N} = w_n^j \, \frac{\tilde{\gamma}_{n+p}\big( \{s_{1:n}^j, s'_{n+1:n+p}\} \big)}{\tilde{\gamma}_n(s_{1:n}^j)}, \tag{7}$$
$$\frac{\tilde{\gamma}_{n+p}\big( \{s_{1:n}^j, s'_{n+1:n+p}\} \big)}{\tilde{\gamma}_n(s_{1:n}^j)} = \frac{p(s_{1:n+p}, y_{1:n+p}, t_{1:n+p})}{p(s_{1:n}, y_{1:n}, t_{1:n})} = \frac{\mathcal{L}(t_{1:n+p})}{\mathcal{L}(t_{1:n})} \prod_{j=n+1}^{n+p} F(y_j \,|\, s_j)\, \pi(s_j \,|\, s_{j-1}). \tag{8}$$
For notational simplicity, we use $w_n^j$ to represent $w_n(s_n^j)$. In general, $n + p$ needs to reach the last event in the sequence. However, due to the computational cost and the influence decay of the past events in the proposed iHSMM-IPP, it is practical and feasible to use only a small number of events as an approximation instead of using all the remaining events in Eq. 8.
Figure 2: (1) Normalized Hamming distance errors for synthetic data. (2) Cleaned energy consumption readings of the REDD data set. (3) Estimated states by the proposed iHSMM-IPP model.
5 Empirical Study
In the following experiments, we demonstrate the performance of the proposed inference algorithm
and show the applications of the proposed iHSMM-IPP model in real-world settings.
5.1 Synthetic Data
As in [20, 5, 19], we generate the synthetic data of 1000 events via a 4-state Gaussian-emission HMM with a self-transition probability of 0.75 and the remaining probability mass uniformly distributed over the other 3 states. The means of the emissions are set to $\{-2.0, -0.5, 1.0, 4.0\}$ with a standard deviation of 0.5. The occurrence times of the events are generated via the Hawkes process with 4 different triggering kernels, each of which corresponds to an HMM state. The background intensity is set to 0.6 and the triggering kernels take the exponential form $\lambda(t) = 0.6 + \sum_{t_n < t} \alpha_0 \cdot \exp(-\beta_0 (t - t_n))$, with $\{0.1, 0.9\}$, $\{0.5, 0.9\}$, $\{0.1, 0.6\}$, $\{0.5, 0.6\}$ as the $\{\alpha_0, \beta_0\}$ parameter pairs of the kernels. A thinning process [15] (a point process variant of rejection sampling) is used to generate the event times of the Hawkes process.
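A compact version of such a thinning sampler (our sketch; a single exponential kernel is used for brevity, whereas the experiment switches kernels with the latent state; the bound $\bar{\lambda} = \lambda(t^+)$ is valid because the intensity only decays between events):

```python
import numpy as np

def intensity(t, times, mu, alpha0, beta0):
    return mu + alpha0 * np.exp(-beta0 * (t - np.asarray(times))).sum()

def sample_hawkes_thinning(mu, alpha0, beta0, T, rng):
    """Ogata-style thinning: propose from a dominating rate, accept with lambda/lam_bar."""
    times, t = [], 0.0
    while True:
        lam_bar = intensity(t, times, mu, alpha0, beta0)   # decays until the next event
        t += rng.exponential(1.0 / lam_bar)
        if t >= T:
            return np.array(times)
        if rng.uniform() < intensity(t, times, mu, alpha0, beta0) / lam_bar:
            times.append(t)

print(len(sample_hawkes_thinning(0.6, 0.5, 0.9, T=50.0, rng=np.random.default_rng(0))))
```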
We compared 4 related methods to demonstrate the performance of the proposed iHSMM-IPP model and inference algorithm: the particle Gibbs sampler for the sticky HDP-HMM [19], the weak-limit sampler for the HDP-HSMM [8], the Metropolis-within-Gibbs sampler for the marked Hawkes process [17], and variational inference for the marked Hawkes process [11]. The normalized Hamming distance error is used to measure the performance of the estimated state sequences. The Diff distance used in [22], i.e., $\int (\lambda(t) - \hat{\lambda}(t))^2\, dt \,\big/ \int (\lambda(t))^2\, dt$, where $\lambda(t)$ and $\hat{\lambda}(t)$ represent the true and estimated kernels respectively, is adopted for measuring the performance of the estimated triggering kernels. The estimated ones are greedily matched to minimize their distances from the ground truth.
The average results of the normalized Hamming distance errors are shown in Fig. 2 (1) and the
Diff distance errors are shown in the second column of Table 1. The results show that the proposed
inference method can not only quickly converge to an accurate estimation of the latent state sequence
but also well recover the underlying triggering kernels. Its clear advantage over the compared SSMs
and marked Hawkes processes is due to its considerations of both occurrence times and emitted
observations for the inference.
5.2 Understanding Energy Consumption Behaviours of Households
In this section, we use energy consumption data from the Reference Energy Disaggregation Dataset (REDD) [9] to demonstrate the application of the proposed model. The data set was collected via smart meters recording detailed appliance-level electricity consumption information for individual houses, with the intention of understanding household energy usage patterns and making recommendations for efficient consumption. The 1 Hz low-frequency REDD data is used and down-sampled to 1 reading per minute, covering 1 day of energy consumption. Very low and very high consumption readings are removed from the reading sequence. Fig. 2 (2) shows the cleaned reading sequence. Colours indicate appliance types and readings are in Watts.
The appliance types are modelled as latent states in the proposed iHSMM-IPP model. The readings are the emitted observations of the states, governed by Gaussian distributions. The relationship between the usages of different appliances is modelled via the state transition matrix. The triggering kernels of the states in the model depict the influences of appliances on triggering subsequent energy consumption, e.g., the usage of a washing machine triggers the subsequent energy usage of a dryer. As in the first experiment, the exponential form of trigger is adopted and independent exponential priors with hyper-parameter 0.01 are used for the kernel parameters $(\alpha_0, \beta_0)$.

Method     | Synthetic |        REDD        |                 Pipe
           |   Diff    | Hamming |  LogLik  | Hamming | LogLik | MSE Failures | MSE Hours
iHSMM-IPP  |   0.36    |  0.30   | -120.11  |  0.39   |  -677  |     82.8     |   28.6
M-MHawkes  |   0.55    |  0.63   | -173.36  |  0.64   | -1035  |    142.2     |   80.2
VI-MHawkes |   0.62    |  0.76   | -193.62  |  0.78   | -1200  |    166.7     |   93.7
HDP-HSMM   |     -     |  0.42   | -147.52  |  0.52   |  -850  |    103.8     |   42.3
S-HDP-HMM  |     -     |  0.55   | -163.28  |  0.59   |  -993  |    128.5     |   55.9

Table 1: Results on Synthetic, REDD and Pipe data sets.
The 4 methods used in the first experiment are compared with the proposed model. The average results of the normalized Hamming distance errors and the log likelihoods are shown in the third and fourth columns of Table 1. The proposed model outperforms the other methods due to the fact that it has the flexibility to capture the interaction between the usages of different appliances. The other models mainly rely on the emitted observations, i.e., the readings, for inferring the types of appliances.
5.3 Understanding Infrastructure Failure Behaviours and Impacts
Drinking water pipe networks are valuable infrastructure assets. Their failures (e.g., pipe bursts and
leaks) can cause tremendous social and economic costs. Hence, it is of significant importance to
understand the behaviours of pipe failures (i.e., occurrence time, failure type, labour hours for repair).
In particular, the relationship between the types of two consecutive failures, the triggering effect of a
failure on the intensity of future failures and the labour hours taken for a certain type of failure can
help provide not only insights but also guidance to make informed maintenance strategies.
In this experiment, a sequence of 1600 failures that occurred in the same zone within 15 years, with 10 different failure types [12], is used for testing the performance of the proposed iHSMM-IPP model. Failure types are modelled as latent states. Labour hours for repair are the emissions of the states, which are modelled by Gaussian distributions. It is well observed in industry that pipe failures occur in clusters, i.e., certain types of failures can cause a high failure risk in the near future. Such behaviours are modelled via the triggering kernels of the states.
As in the first experiment, we compare the proposed iHSMM-IPP model with the 4 related methods. The sequence is divided into two parts: 90% and 10%. The first part of the sequence is used for training the models. The normalized Hamming distance errors and log likelihoods are used for measuring the performances on the first part. Then the models are used for predicting the remaining 10% of the sequence. The predicted total number of failures and total labour hours for each failure type are compared with the ground truth using the mean square error. The results are shown in the last four columns of Table 1. It can be seen that the proposed iHSMM-IPP achieves the best performance for both the estimation on the first part of the sequence and the prediction on the second part of the sequence. Its superiority comes from the fact that it well utilizes both the observed labour hours and occurrence times, while the others only consider part of the observed information or have limitations on model flexibility.
6 Conclusion

In this work, we proposed a new Bayesian nonparametric stochastic point process model, namely the infinite hidden semi-Markov modulated interaction point process model. It models both the emitted observations and the arrival times of temporal events in order to capture the underlying event correlation. A Metropolis-within-particle-Gibbs sampler with truncated ancestor resampling is developed for the posterior inference of the proposed model. The effectiveness of the sampler is shown on a synthetic data set in comparison with 4 related state-of-the-art methods. The superiority of the proposed model over the compared methods is also demonstrated in two real-world applications, i.e., household energy consumption modelling and infrastructure failure modelling.
References
[1] C. Andrieu, A. Doucet, and R. Holenstein. Particle Markov chain Monte Carlo methods. Journal of the
Royal Statistical Society: Series B (Statistical Methodology), 72(3):269–342, 2010.
[2] M. J. Beal, Z. Ghahramani, and C. E. Rasmussen. The infinite hidden Markov model. In NIPS, pages
577–584, 2001.
[3] D. J. Daley and D. Vere-Jones. An introduction to the theory of point processes: volume II: general theory
and structure. Springer Science & Business Media, 2007.
[4] P. J. Diggle, T. Fiksel, P. Grabarnik, Y. Ogata, D. Stoyan, and M. Tanemura. On parameter estimation for
pairwise interaction point processes. International Statistical Review, pages 99–117, 1994.
[5] E. B. Fox, E. B. Sudderth, M. I. Jordan, and A. S. Willsky. An HDP-HMM for systems with state persistence.
In Proceedings of the 25th international conference on Machine learning, pages 312–319. ACM, 2008.
[6] A. G. Hawkes. Spectra of some self-exciting and mutually exciting point processes. Biometrika, 58(1):83–90, 1971.
[7] A. G. Hawkes and D. Oakes. A cluster process representation of a self-exciting process. Journal of Applied
Probability, pages 493–503, 1974.
[8] M. J. Johnson and A. Willsky. The hierarchical Dirichlet process hidden semi-Markov model. arXiv
preprint arXiv:1203.3485, 2012.
[9] J. Z. Kolter and M. J. Johnson. REDD: A public data set for energy disaggregation research. In Proceedings of the
SustKDD Workshop on Data Mining Applications in Sustainability, 2011.
[10] J. Kingman. On doubly stochastic Poisson processes. In Mathematical Proceedings of the Cambridge
Philosophical Society, volume 60, pages 923–930. Cambridge Univ Press, 1964.
[11] L. Li and H. Zha. Energy usage behavior modeling in energy disaggregation via marked Hawkes process. In
Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, January 25-30, 2015, Austin,
Texas, USA, pages 672–678, 2015.
[12] P. Lin, B. Zhang, T. Guo, Y. Wang, and F. Chen. Interaction point processes via infinite branching model.
The Thirtieth AAAI Conference on Artificial Intelligence (AAAI), 2016.
[13] F. Lindsten, M. I. Jordan, and T. B. Schön. Particle Gibbs with ancestor sampling. The Journal of Machine
Learning Research, 15(1):2145–2184, 2014.
[14] K. P. Murphy. Hidden semi-Markov models (HSMMs). Unpublished notes, 2, 2002.
[15] Y. Ogata. On Lewis' simulation method for point processes. Information Theory, IEEE Transactions on,
27(1):23–31, 1981.
[16] L. R. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition.
Proceedings of the IEEE, 77(2):257–286, 1989.
[17] J. G. Rasmussen. Bayesian inference for Hawkes processes. Methodology and Computing in Applied
Probability, 15(3):623–642, 2013.
[18] Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. Hierarchical Dirichlet processes. Journal of the
American Statistical Association, 101(476), 2006.
[19] N. Tripuraneni, S. Gu, H. Ge, and Z. Ghahramani. Particle Gibbs for infinite hidden Markov models. In
Advances in Neural Information Processing Systems, pages 2386–2394, 2015.
[20] J. Van Gael, Y. Saatci, Y. W. Teh, and Z. Ghahramani. Beam sampling for the infinite hidden Markov
model. In Proceedings of the 25th international conference on Machine learning, pages 1088–1095. ACM,
2008.
[21] S.-Z. Yu. Hidden semi-Markov models. Artificial Intelligence, 174(2):215–243, 2010.
[22] K. Zhou, H. Zha, and L. Song. Learning triggering kernels for multi-dimensional Hawkes processes. In
ICML, pages 1301–1309, 2013.
High resolution neural connectivity from incomplete
tracing data using nonnegative spline regression
Kameron Decker Harris
Applied Mathematics, U. of Washington
[email protected]
Stefan Mihalas
Allen Institute for Brain Science
Applied Mathematics, U. of Washington
[email protected]
Eric Shea-Brown
Applied Mathematics, U. of Washington
Allen Institute for Brain Science
[email protected]
Abstract
Whole-brain neural connectivity data are now available from viral tracing experiments, which reveal the connections between a source injection site and elsewhere
in the brain. These hold the promise of revealing spatial patterns of connectivity
throughout the mammalian brain. To achieve this goal, we seek to fit a weighted,
nonnegative adjacency matrix among 100 µm brain "voxels" using viral tracer data.
Despite a multi-year experimental effort, injections provide incomplete coverage,
and the number of voxels in our data is orders of magnitude larger than the number
of injections, making the problem severely underdetermined. Furthermore, projection data are missing within the injection site because local connections there are
not separable from the injection signal.
We use a novel machine-learning algorithm to meet these challenges and develop a
spatially explicit, voxel-scale connectivity map of the mouse visual system. Our
method combines three features: a matrix completion loss for missing data, a
smoothing spline penalty to regularize the problem, and (optionally) a low rank
factorization. We demonstrate the consistency of our estimator using synthetic data
and then apply it to newly available Allen Mouse Brain Connectivity Atlas data for
the visual system. Our algorithm is significantly more predictive than current state
of the art approaches which assume regions to be homogeneous. We demonstrate
the efficacy of a low rank version on visual cortex data and discuss the possibility
of extending this to a whole-brain connectivity matrix at the voxel scale.
1 Introduction
Although the study of neural connectivity is over a century old, starting with pioneering neuroscientists
who identified the importance of networks for determining brain function, most knowledge of
anatomical neural network structure is limited to either detailed description of small subsystems
[2, 9, 14, 26] or to averaged connectivity between larger regions [7, 21]. We focus our attention
on spatial, structural connectivity at the mesoscale: a coarser scale than that of single neurons or
cortical columns but finer than whole brain regions. Thanks to the development of new tracing
techniques, image processing algorithms, and high-throughput methods, data at this resolution are
now accessible in animals such as the fly [12, 19] and mouse [15, 18]. We present a novel regression
technique tailored to the challenges of learning spatially refined mesoscale connectivity from neural
tracing experiments. We have designed this technique with neural data in mind and will use this
Figure 1: A, We seek to fit a matrix W which reproduces neural tracing experiments. Each column
of W represents the expected signal in target voxels given an injection of one unit into a single source
voxel. B, In the work of Oh et al. [18], a regionally homogeneous connectivity matrix was fit using
a predefined regional parcellation to constrain the problem. We propose that smoothness of W is
a better prior. C, The mouse's visual field can be represented in azimuth/altitude coordinates. This
representation is maintained in the retinotopy, a smoothly varying map replicated in many visual
areas (e.g. [8]). D, Assuming locations in VISp (the primary visual area) project most strongly to
positions which represent the same retinotopic coordinates in a secondary visual area, then we expect
the mapping between upstream and downstream visual areas to be smooth.
language to describe our method, but it is a general technique to assimilate spatial network data or
infer smooth kernels of integral equations. Obtaining a spatially-resolved mesoscale connectome will
reveal detailed features of connectivity, for example unlocking cell-type specific connectivity and
microcircuit organization throughout the brain [13].
In mesoscale anterograde tracing experiments, a tracer virus is first injected into the brain. This
infects neurons primarily at their cell bodies and dendrites and causes them to express a fluorescent
protein in their cytoplasm, including in their axons. Neurons originating in the source injection site
are then imaged to reveal their axonal projections throughout the brain. Combining many experiments
with different sources then reveals the pathways that connect those sources throughout the brain. This
requires combining data across multiple animals, which appears justified at the mesoscale [18].
We assume there exists some underlying nonnegative, weighted adjacency matrix W ⪰ 0 that is
common across animals. Each experiment can be thought of as an injection x, and its projections
y, so that y ≈ Wx as in Fig. 1A. Uncovering the unknown W from multiple experiments (x_i, y_i)
for i = 1, …, n_inj is then a multivariate regression problem: Each x_i is an image of the brain which
represents the strength of the signal within the injection site. Likewise, every y_i is an image of the
strength of signal elsewhere, which arises due to the axonal projections of neurons with cell bodies
in the injection site. The unknown matrix W is a linear operator which takes images of the brain
(injections) and returns images of the brain (projections).
In a previous paper, Oh et al. [18] were able to obtain a 213 × 213 regional weight matrix using 469
experiments with mice (Fig. 1B). They used nonnegative least squares to find the unknown regional
weights in an overdetermined regression problem. Our aim is to obtain a much higher-resolution
connectivity map on the scale of voxels, and this introduces many more challenges.
First, the number of voxels in the brain is much larger than the number of injection experiments we
can expect to perform; for mouse with 100 µm voxels this is O(10⁵) versus O(10³) [15, 18]. Also, the
injections that are performed will inevitably leave gaps in their coverage of the brain. Thus specifying
W is underdetermined. Second, there is no way to separately image the injections and projections. In
order to construct them, experimenters image the brain once by serial tomography and fluorescence
microscopy. The injection sites can be annotated by finding infected cell bodies, but there is no way to
disambiguate fluorescence from the cell bodies and dendrites from that of local injections. Projection
strength is thus unknown within the injection sites and the neighborhood occupied by dendrites.
Third, fitting full-brain voxel-wise connectivity is challenging since the number of elements in W is
the square of the number of voxels in the brain. Thus we need compressed representations of W as
well as efficient algorithms to perform inference. The paper proceeds as follows.
In Section 2, we describe our assumption that the mesoscale connectivity W is smoothly-varying in
space, as could be expected due to the presence of topographic maps across much of cortex. Later,
we show that using this assumption as a prior yields connectivity maps with improved cross-validation
performance.
In Section 3, we present an inference algorithm designed to tackle the difficulties of underdetermination, missing data, and size of the unknown W . To deal with the gaps and ill-conditioning, we use
smoothness as a regularization on W . We take an agnostic approach, similar to matrix completion
[5], to the missing projection data and use a regression loss function that ignores residuals within the
injection site. Finally, we present a low rank version of the estimator that will allow us to scale to
large matrices.
In Section 4, we test our method on synthetic data and show that it performs well for sparse data that
is consistent with the regression priors. This provides evidence that it is a consistent estimator. We
demonstrate the necessity of both the matrix completion and smoothing terms for good reconstruction.
In Section 5, we then apply the spline-smoothing method to recently available Allen Institute for
Brain Science (Allen Institute) connectivity data from mouse visual cortex [15, 18]. We find that our
method is able to outperform current spatially uniform regional models, with significantly reduced
cross-validation errors. We also find that a low rank version is able to achieve approximately 23?
compression of the original data, with the optimal solution very close to the full rank optimum. Our
method is a superior predictor to the existing regional model for visual system data, and the success
of the low rank version suggests that this approach will be able to reveal whole-brain structural
connectivity at unprecedented scale.
All of our supplemental material and data processing and optimization code is available for download
from: https://github.com/kharris/high-res-connectivity-nips-2016.
2 Spatial smoothness of mesoscale connectivity
The visual cortex is a collection of relatively large cortical areas in the posterior part of the mammalian
brain. Visual stimuli sensed in the retina are relayed through the thalamus into primary visual cortex
(VISp), which projects to higher visual areas. We know this partly due to tracing projections between
these areas, but also because neurons in the early visual areas respond to visual stimuli in a localized
region of the visual field called their receptive fields [11].
An interesting and important feature of visual cortex is the presence of topographic maps of the visual
field called the retinotopy [6, 8, 10, 20, 25]. Each eye sees a 2-D image of the world, where two
coordinates, such as azimuth and altitude, define a point in the visual field (Fig. 1C). Retinotopy
refers to the fact that cells are organized in cortical space by the position of their receptive fields;
nearby cells have similar receptive field positions. Furthermore, these retinotopic maps reoccur in
multiple visual areas, albeit with varying orientation and magnification.
Retinotopy in other areas downstream from VISp, which do not receive many projections directly
from thalamus, is likely a function of projections from VISp. It is reasonable to assume that areas
which code for similar visual locations are most strongly connected. Then, because retinotopy is
smoothly varying in cortical space and similar retinotopic coordinates are the most strongly connected
between visual areas, the connections between those areas should be smooth in cortical space (Fig. 1C
and D).
Retinotopy is a specific example of topography, which extends to other sensory systems such as
auditory and somatosensory cortex [22]. For this reason, connectivity may be spatially smooth
throughout the brain, at least at the mesoscale. This idea can be evaluated via the methods we
introduce below: if a smooth model is more predictive of held-out data than another model, then this
supports the assumption.
3 Nonnegative spline regression with incomplete tracing data
We consider the problem of fitting an adjacency operator W : T × S → R_+ to data arising from n_inj
injections into a source space S which projects to a target space T. Here S and T are compact subsets
of the brain, itself a compact subset of R³. In this mathematical setting, S and T could be arbitrary
sets, but typically S = T for the ipsilateral data we present here.¹ The source S and target T are
discretized into n_x and n_y cubic voxels, respectively. The discretization of W is then an adjacency
matrix W ∈ R_+^{n_y × n_x}. Mathematically, we define the tracing data as a set of pairs x_i ∈ R_+^{n_x} and
y_i ∈ R_+^{n_y}, the source and target tracer signals at each voxel for experiments i = 1, …, n_inj. We
would like to fit a linear model, a matrix W such that y_i ≈ W x_i. We assume an observation model

    y_i = W x_i + η_i

with η_i ~ N(0, σ²I) i.i.d. multivariate Gaussian random variables with zero mean and covariance matrix
σ²I ∈ R^{n_y × n_y}. The true data are not entirely linear, due to saturation effects of the fluorescence
signal, but the linear model provides a tractable way of "credit assignment" of individual source
voxels' contributions to the target signal [18].
Finally, we assume that the target projections are unknown within the injection site. In other words, we
only know y_j outside the support of x_j, which we denote supp x_j, and we wish to only evaluate error
for the observable voxels. Let Ω ∈ R^{n_y × n_inj}, where the jth column Ω_j = 1 − 1_{supp x_j}, the indicator
of the complement of the support. We define the orthogonal projector P_Ω : R^{n_y × n_inj} → R^{n_y × n_inj}
as P_Ω(A) = A ∘ Ω, the entrywise product of A and Ω. This operator zeros elements of A which
correspond to the voxels within each experiment's injection site. The operator P_Ω is similar to what
is used in matrix completion [5], here in the context of regression rather than recovery.
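For concreteness, a minimal NumPy sketch of the mask Ω and the projector P_Ω, assuming the injection images are stored as the columns of X; names are illustrative and not from the authors' code:

    # A minimal sketch of Omega and P_Omega; injections are columns of X.
    import numpy as np

    def make_omega(X):
        """Omega[:, j] = 1 - indicator(supp x_j): zero inside injection j."""
        return (X <= 0).astype(float)

    def P_omega(A, Omega):
        """Zero the entries of A inside each experiment's injection site."""
        return A * Omega   # entrywise (Hadamard) product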
These assumptions lead to a loss function which is the familiar ℓ²-loss applied to the projected
residuals:

    (1/(σ² n_inj)) ‖P_Ω(WX − Y)‖_F²    (1)

where Y = [y_1, …, y_{n_inj}] and X = [x_1, …, x_{n_inj}] are data matrices. Here ‖·‖_F is the Frobenius
norm, i.e. the ℓ²-norm of the matrix as a vector: ‖A‖_F = ‖vec(A)‖_2, where vec(A) takes a matrix
and converts it to a vector by stacking consecutive columns.
We next construct a regularization penalty. The matrix W represents the spatial discretization of
a two-point kernel W. An important assumption for W is that it is spatially smooth. Function
space norms of the derivatives of W, viewed as a real-valued function on T × S, are a natural way
to measure the roughness of this function. For this study, we chose the squared L²-norm of the
Laplacian

    ∫_{T×S} |ΔW|² dy dx,

which is called the thin plate spline bending energy [24]. In the discrete setting, this becomes the
squared ℓ²-norm of a discrete Laplacian applied to W:

    ‖L vec(W)‖_2² = ‖L_y W + W L_x^T‖_F².    (2)

The operator L : R^{n_y n_x} → R^{n_y n_x} is the discrete Laplacian operator or second finite difference
matrix on T × S. The equality in Eqn. (2) results from the fact that the Laplacian on the product
space T × S can be decomposed as L = L_x ⊗ I_{n_y} + I_{n_x} ⊗ L_y [17]. Using the well-known Kronecker
product identity for linear matrix equations

    (B^T ⊗ A) vec(X) = vec(Y) ⟺ AXB = Y    (3)

gives the result in Eqn. (2) [23], which allows us to efficiently evaluate the Laplacian action. As for
boundary conditions, we do not want to impose any particular values at the boundary, so we choose
the finite difference matrix corresponding to a homogeneous Neumann (zero derivative) boundary
condition.²
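A small NumPy sketch of this penalty evaluation, using 1-D homogeneous Neumann finite-difference Laplacians for illustration; per Eqns. (2)-(3), the large Laplacian L is never formed:

    # Bending-energy penalty via Eqns. (2)-(3), without forming the
    # (n_y n_x) x (n_y n_x) Laplacian L. Illustrative 1-D Laplacians.
    import numpy as np

    def neumann_laplacian_1d(n):
        L = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
        L[0, 0] = L[-1, -1] = -1.0   # homogeneous Neumann boundary rows
        return L

    def bending_energy(W, Ly, Lx):
        """||L vec(W)||_2^2 computed as ||L_y W + W L_x^T||_F^2."""
        R = Ly @ W + W @ Lx.T
        return float(np.sum(R * R))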
¹ Ipsilateral refers to connections within the same cerebral hemisphere. For contralateral (opposite hemisphere)
connectivity, S and T are disjoint subsets of the brain corresponding to the two hemispheres.
² It is straightforward to avoid smoothing across region boundaries by imposing Neumann boundary conditions
at the boundaries; this is an option in our code available online.
Combining the loss and penalty terms, Eqn. (1) and (2), gives a convex optimization problem for
inferring the connectivity:
    W* = arg min_{W ⪰ 0} ‖P_Ω(WX − Y)‖_F² + λ (n_inj/n_x) ‖L_y W + W L_x^T‖_F².    (P1)

In the final form, we absorb the noise variance σ² into the regularization hyperparameter λ and
rescale the penalty so that it has the same dependence on the problem size n_x, n_y, and n_inj as the loss.
We solve the optimization (P1) using the L-BFGS-B projected quasi-Newton method, implemented
in C++ [3, 4]. The gradient is efficiently computed using matrix algebra.
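For small problems, the optimization can be sketched with an off-the-shelf L-BFGS-B routine; the following uses SciPy for illustration (the authors' solver is a C++ implementation, and this sketch is not their code):

    # A sketch of solving (P1) with SciPy's L-BFGS-B; feasible only for
    # small n_x, n_y since the iterate has n_y * n_x entries.
    import numpy as np
    from scipy.optimize import minimize

    def fit_full_rank(X, Y, Omega, Ly, Lx, lam):
        ny, nx = Y.shape[0], X.shape[0]
        scale = lam * X.shape[1] / nx          # lambda * n_inj / n_x

        def obj_and_grad(w):
            W = w.reshape(ny, nx)
            R = (W @ X - Y) * Omega            # masked residual
            S = Ly @ W + W @ Lx.T              # smoothing term
            f = np.sum(R * R) + scale * np.sum(S * S)
            g = 2.0 * (R @ X.T) + 2.0 * scale * (Ly.T @ S + S @ Lx)
            return f, g.ravel()

        res = minimize(obj_and_grad, np.zeros(ny * nx), jac=True,
                       method="L-BFGS-B", bounds=[(0.0, None)] * (ny * nx))
        return res.x.reshape(ny, nx)           # nonnegativity via box bounds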
Note that (P1) is a type of nonnegative least squares problem, since we can use Eqn. (3) to convert it
into
    w* = arg min_{w ⪰ 0} ‖Aw − y‖_2² + λ (n_inj/n_x) ‖L w‖_2²,

where A = diag(vec(Ω)) (X^T ⊗ I_{n_y}), y = diag(vec(Ω)) vec(Y), and w = vec(W). Furthermore,
without the nonnegativity constraint the estimator is linear and has an explicit solution. However, the
design matrix A will have dimension (n_y n_inj) × (n_y n_x), with O(n_y³ n_inj) entries if n_x = O(n_y).
The dimensionality of the problem prevents us from working directly in the tensor product space.
And since the model is a structured matrix regression problem [1], the usual representer theorems
[24], which reduce the dimensionality of the estimator to effectively the number of data points, do
not immediately apply. However, we hope to elucidate the connection to reproducing kernel Hilbert
spaces in future work.
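A quick back-of-the-envelope computation makes the point for the visual-system problem of Section 5 (n_x = n_y = 7497, n_inj = 28):

    # Size of the Kronecker design matrix A for the Section 5 problem.
    n_x = n_y = 7497
    n_inj = 28
    entries = (n_y * n_inj) * (n_y * n_x)
    print(f"{entries:.1e}")  # ~1.2e13 entries, so A is never formed explicitly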
3.1 Low rank version
The largest object in our problem is the unknown connectivity W, since in the underconstrained
setting n_inj ≪ n_x, n_y. In order to improve the scaling of our problem with the number of voxels, we
reformulate it with a compressed version of W:

    (U*, V*) = arg min_{U,V ⪰ 0} ‖P_Ω(U V^T X − Y)‖_F² + λ (n_inj/n_x) ‖L_y U V^T + U V^T L_x^T‖_F².    (P2)

Here, U ∈ R_+^{n_y × r} and V ∈ R_+^{n_x × r} for some fixed rank r, so that the optimal connectivity W* =
U* V*^T is given in low rank, factored form. Note that we use nonnegative factors rather than constrain
U V^T ⪰ 0, since this is a nonlinear constraint.
This has the advantage of automatically computing a nonnegative matrix factorization (NMF) of
W . The NMF is of separate scientific interest, to be pursued in future work, since it decomposes
the connectivity into a relatively small number of projection patterns, which has interpretations as a
clustering of the connectivity itself.
In going from the full rank problem (P1) to the low rank version (P2), we lose convexity. So the
usual optimization methods are not guaranteed to find a global optimum, and the clustering just
mentioned is not unique. However, we have also reduced the size of the unknowns to the potentially
much smaller matrices U and V, if r ≪ n_y, n_x. If n_x = O(n_y), we have only O(n_y r) unknowns
instead of O(n_y²). Evaluating the penalty term still requires computation of n_y n_x terms, but this can
be performed without storing them in memory.
We use a simple projected gradient method with Nesterov acceleration in Matlab to find a local
optimum for (P2) [3], and will present and compare these results to the solution of (P1) below. As
before, computing the gradients is efficient using matrix algebra. This method has been used before
for NMF [16].
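A minimal sketch of one projected-gradient step on the factor U (the step on V is analogous, and the Nesterov extrapolation is omitted for brevity); since (P2) is nonconvex, this reaches a local optimum only. Names are illustrative:

    # One projected-gradient step on U for (P2); illustrative only.
    import numpy as np

    def step_U(U, V, X, Y, Omega, Ly, Lx, scale, lr):
        W = U @ V.T
        R = (W @ X - Y) * Omega
        S = Ly @ W + W @ Lx.T
        gW = 2.0 * (R @ X.T) + 2.0 * scale * (Ly.T @ S + S @ Lx)
        gU = gW @ V                          # chain rule through W = U V^T
        return np.maximum(U - lr * gU, 0.0)  # projection onto U >= 0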
4 Test problem
We next apply our algorithms to a test problem consisting of a one-dimensional "brain," where the
source and target space S = T = [0, 1]. The true connectivity kernel corresponds to a Gaussian
profile about the diagonal plus a bump:

    W_true(x, y) = exp{−((x − y)/0.4)²} + 0.9 exp{−((x − 0.8)² + (y − 0.1)²)/(0.2)²}.
[Figure 2: four panels showing W_true, W_full without P_Ω, W_full with λ = 0, and W_rank 20; colorbar tick labels omitted.]
Figure 2: Comparison of the true (far left) and inferred connectivity from 5 injections. Unless noted,
λ = 100. Second from left, we show what happens when we solve (P1) without the matrix
completion term P_Ω. The holes in the projection data cause patchy and incorrect output. Note the
colorbar range is 6× that in the other cases. Second from right is the result with P_Ω but without
regularization, solving (P1) for λ = 0. There, the solution does not interpolate between injections.
Far right is a rank r = 20 result using (P2), which captures the diagonal band and off-diagonal
bump that make up Wtrue . In this case, the low rank result has less relative error (9.6%) than the full
rank result (11.1%, not shown).
See the left panel of Fig. 2. The input and output spaces were discretized using n_x = n_y = 200
points. Injections are delivered at random locations within S, with a width of 0.12 + 0.1ε where
ε ∼ Uniform(0, 1). The values of x are set to 1 within the injection region and 0 elsewhere, y is
set to 0 within the injection region, and we take noise level σ = 0.1. The matrices L_x = L_y are the
5-point finite difference Laplacians for the rectangular lattice.
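A sketch of generating this synthetic problem in NumPy, following the widths and noise level stated above (the random seed and helper names are illustrative):

    # Generating the 1-D test problem; a sketch following the text above.
    import numpy as np

    rng = np.random.default_rng(0)
    nx = ny = 200
    s = np.linspace(0.0, 1.0, nx)
    xx, yy = np.meshgrid(s, s)               # xx: source coord, yy: target
    W_true = (np.exp(-((xx - yy) / 0.4) ** 2)
              + 0.9 * np.exp(-((xx - 0.8) ** 2 + (yy - 0.1) ** 2) / 0.2 ** 2))

    def make_injection(width):
        c = rng.uniform(0.0, 1.0)            # random injection location in S
        x = ((s > c - width / 2) & (s < c + width / 2)).astype(float)
        y = W_true @ x + 0.1 * rng.standard_normal(ny)
        y[x > 0] = 0.0                       # projections unknown in the site
        return x, y

    pairs = [make_injection(0.12 + 0.1 * rng.uniform()) for _ in range(5)]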
Example output of (P1) and (P2) is given for 5 injections in Fig. 2. Unless stated otherwise, λ = 100.
The injections, depicted as black bars in the bottom of each sub-figure, do not cover the whole space
S but do provide good coverage of the bump, otherwise there is no information about that feature.
We depict the result of the full rank algorithm (P1) without the matrix completion term P_Ω, the result
including P_Ω but without smoothing (λ = 0), and the result of (P2) with rank r = 20. The full rank
solution is not shown, but is similar to the low rank one.
Figure 2 shows the necessity of each term within the algorithm. Leaving out the matrix completion
P_Ω leads to dramatically biased output since the algorithm uses incorrect values y_{supp(x)} = 0. If we
include P_Ω but neglect the smoothing term by setting λ = 0, we also get incorrect output: without
smoothing, the algorithm cannot fill in the injection site holes nor can it interpolate between injections.
However, the low rank result accurately approximates the true connectivity W_true, including the
diagonal profile and bump, achieving 9.6% relative error measured as ‖W* − W_true‖_F / ‖W_true‖_F.
5 Finding a voxel-scale connectivity map for mouse cortex
We next apply our method to the latest data from the Allen Institute Mouse Brain Connectivity Atlas,
obtained with the API at http://connectivity.brain-map.org. Briefly, in each experiment
mice were injected with adeno-associated virus expressing a fluorescent protein. The virus infects
neurons in the injection site, causing them to produce the protein, which is transported throughout
the axonal and dendritic processes. The mouse brains for each experiment were then sliced, imaged,
and aligned onto the common coordinates in the Allen Reference Atlas version 3 [15, 18]. These
coordinates divide the brain volume into 100 µm × 100 µm × 100 µm voxels, with approximately
5 × 10⁵ voxels in the whole brain. The fluorescent pixels in each aligned image were segmented
from the background, and we use the fraction of segmented versus total pixels in a voxel to build the
vectors x and y. Since cortical dendrites project locally, the signal outside the injection site is mostly
axonal, and so the method reveals anterograde axonal projections from the injection site.
From this dataset, we selected 28 experiments which have 95% of their injection volumes contained
within the visual cortex (atlas regions VISal, VISam, VISl, VISp, VISpl, VISpm, VISli, VISpor,
VISrl, and VISa) and injection volume less than 0.7 mm³. For this study, we present only the results
for ipsilateral connectivity, where S = T and n_x = n_y = 7497. To compute the smoothing penalty,
we used the 7-point finite-difference Laplacian on the cubic voxel lattice.
Model      Voxel MSE_rel      Regional MSE_rel
Regional   107% (70%)         48% (6.8%)
Voxel      33% (10%)          16% (2.3%)
Table 1: Model performance on Allen Institute Mouse Brain Connectivity Atlas data. Cross-validation
errors of the voxel model (P1) and regionally homogeneous models are shown, with training errors
in parentheses. The errors are computed in both voxel space and regional space, using the relative
mean squared error MSE_rel, Eqn. (4). In either space, the voxel model shows reduced training and
cross-validation errors relative to the regional model.
In order to evaluate the performance of the estimator, we employ nested cross-validation with 5 inner
and outer folds. The full rank estimator (P1) was fit for λ = 10³, 10⁴, …, 10¹² on the training data.
Using the validation data, we then selected the λ_opt that minimized the mean square error relative to
the average squared norm of the prediction WX and truth Y, evaluating errors outside the injection
sites:

    MSE_rel = 2‖P_Ω(WX − Y)‖_F² / (‖P_Ω(WX)‖_F² + ‖P_Ω(Y)‖_F²).    (4)

This choice of normalization prevents experiments with small ‖Y‖ from dominating the error. This
error metric as well as the ℓ²-loss adopted in Eqn. (P1) both more heavily weight the experiments
with larger signal. After selection of λ_opt, the model was refit to the combined training and validation
data. In our dataset, λ_opt = 10⁵ was selected for all outer folds. The final errors were computed
with the test datasets in each outer fold. For comparison, we also fit a regional model within the
cross-validation framework, using nonnegative least squares. To do this, similar to the study by Oh
et al. [18], we constrained the connectivity W_kl = W_{R_i R_j} to be constant for all voxels k in region R_i
and l in region R_j.
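The error metric of Eqn. (4) is straightforward to compute; a minimal sketch, assuming the mask Ω from Section 3:

    # Relative mean squared error of Eqn. (4), outside injection sites.
    import numpy as np

    def mse_rel(W, X, Y, Omega):
        R = (W @ X - Y) * Omega
        num = 2.0 * np.sum(R * R)
        den = np.sum(((W @ X) * Omega) ** 2) + np.sum((Y * Omega) ** 2)
        return num / den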
The results are shown in Table 1. Errors were computed according to both voxels and regions. For the
latter, we integrated the residual over voxels within the regions before computing the error. The voxel
model is more predictive of held-out data than the regional model, reducing the voxel and regional
MSErel by 69% and 67%, respectively. The regional model is designed for inter-region connectivity.
To allow an easier comparison with the voxel model, we here include within region connections. We
find that the regional model is a poor predictor of voxel scale projections, with over 100% relative
voxel error, but it performs adequately at the regional scale. The training errors, which reflect goodness of
fit, were also reduced significantly with the voxel model. We conclude that the more flexible voxel
model is a better estimator for these Allen Institute data, since it improves both the fits to training
data as well as cross-validation skill.
The inferred visual connectivity also exhibits a number of features that we expect. There are strong
local projections (similar to the diagonal in the test problem, Fig. 2) along with spatially organized
projections to higher visual areas. See Fig. 3, which shows example projections from source voxels
within VISp. These are just two of 7497 voxels in the full matrix, and we depict only a 2-D
projection of 3-D images. The connectivity exhibits strong local projections, which must be filled in
by the smoothing since within the injection sites the projection data are unknown; it is surprising
how well the algorithm does at capturing short-range connectivity that is translationally invariant.
There are also long-range bumps in the higher visual areas, medial and lateral, which move with
the source voxel. This is a result of retinotopic maps between VISp and downstream areas. The
supplementary material presents a view of this high-dimensional matrix in movie form, allowing
one to see the varying projections as the seed voxel moves. We encourage the reader to view the
supplemental movies, where movement of bumps in downstream regions hints at the underlying
retinotopy: https://github.com/kharris/high-res-connectivity-nips-2016.
5.1 Low rank inference successfully approximates full rank solution for visual system
We next use these visual system data, for which the full rank solution was computed, to test whether
the low rank approximation can be applied. This is an important stepping stone to an eventual
inference of spatial connectivity for the full brain.
First, we note that the singular value spectrum of the fitted W*_full (now using all 28 injections and
λ = 10⁵) is heavily skewed: 95% of the energy can be captured with 21 of 7497 components, and
99% with 67 components. However, this does not directly imply that a nonnegative factorization will
Figure 3: Inferred connectivity using all 28 selected injections from visual system data. Left,
Projections from a source voxel (blue) located in VISp to all other voxels in the visual areas. The
view is integrated over the superior-inferior axis. The connectivity shows strong local connections
and weaker connections to higher areas, in particular VISam, VISal, and VISl. Movies of the inferred
connectivity (full, low rank, and the low rank residual) for varying source voxel are available in
the supplementary material. Center, For a source 800 µm (8 voxels) away, the pattern of anterograde
projections is similar, but the distal projection centers are shifted, as expected from retinotopy. Right,
The residuals between the full rank and rank 160 result from solving (P2), for the same source voxel
as in the center. The residuals are an order of magnitude less than typical features of the connectivity.
perform as well. To test this, we fit a low rank decomposition directly to all 28 visual injection data
using (P2) with rank r = 160 and λ = 10⁵. The output of the optimization procedure yields U* and
V*, and we find that the low rank output is very similar to the full result W*_full fit to the same data
(see also Fig. 3, which visualizes the residuals):

    ‖U* V*^T − W*_full‖_F / ‖W*_full‖_F = 13%.

This close approximation is despite the fact that the low rank solution achieves a roughly 23×
compression of the 7497 × 7497 matrix.
Assuming similar compressibility for the whole brain, where the number of voxels is 5 × 10⁵, would
mean a rank of approximately 10⁴. This is still a problem in O(10⁹) unknowns, but these bring the
memory requirements of storing one matrix iterate in double precision from approximately 1.9 TB to
75 GB, which is within reach of commonly available large memory machines.
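A back-of-the-envelope check of these memory estimates (double precision, binary units):

    # Checking the memory estimates above (float64).
    n_vox = 5e5                          # whole-brain voxels
    r = 1e4                              # assumed rank
    full_tb = n_vox**2 * 8 / 2**40       # dense W: ~1.8 TiB, roughly 1.9 TB
    low_gb = 2 * n_vox * r * 8 / 2**30   # U and V factors together: ~75 GiB
    print(full_tb, low_gb)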
6 Conclusions
We have developed and implemented a new inference algorithm that uses modern machine learning
ideas (matrix completion loss, a smoothing penalty, and low rank factorization) to assimilate sparse
connectivity data into complete, spatially explicit connectivity maps. We have shown that this
method can be applied to the latest Allen Institute data from multiple visual cortical areas, and that it
significantly improves cross-validated predictions over the current state of the art and unveils spatial
patterning of connectivity. Finally, we show that a low rank version of the algorithm produces very
similar results on these data while compressing the connectivity map, potentially opening the door to
the inference of whole brain connectivity from viral tracer data at the voxel scale.
Acknowledgements
We acknowledge the support of the UW NIH Training Grant in Big Data for Neuroscience and Genetics (KDH),
Boeing Scholarship (KDH), NSF Grant DMS-1122106 and 1514743 (ESB & KDH), and a Simons Fellowship
in Mathematics (ESB). We thank Liam Paninski for helpful insights at the outset of this project. We wish to
thank the Allen Institute founders, Paul G. Allen and Jody Allen, for their vision, encouragement, and support.
This work was facilitated though the use of advanced computational, storage, and networking infrastructure
provided by the Hyak supercomputer system at the University of Washington.
References
[1] Argyriou, A., Micchelli, C. A., and Pontil, M. (2009). When Is There a Representer Theorem? Vector
Versus Matrix Regularizers. J. Mach. Learn. Res., 10:2507–2529.
[2] Bock, D. D., Lee, W.-C. A., Kerlin, A. M., Andermann, M. L., Hood, G., Wetzel, A. W., Yurgenson, S.,
Soucy, E. R., Kim, H. S., and Reid, R. C. (2011). Network anatomy and in vivo physiology of visual cortical
neurons. Nature, 471(7337):177–182.
[3] Boyd, S. and Vandenberghe, L. (2004). Convex Optimization. Cambridge University Press, New York, NY,
USA.
[4] Byrd, R., Lu, P., Nocedal, J., and Zhu, C. (1995). A Limited Memory Algorithm for Bound Constrained
Optimization. SIAM Journal on Scientific Computing, 16(5):1190–1208.
[5] Candes, E. and Tao, T. (2010). The Power of Convex Relaxation: Near-Optimal Matrix Completion. IEEE
Transactions on Information Theory, 56(5):2053–2080.
[6] Chaplin, T. A., Yu, H.-H., and Rosa, M. G. (2013). Representation of the visual field in the primary visual
area of the marmoset monkey: Magnification factors, point-image size, and proportionality to retinal ganglion
cell density. Journal of Comparative Neurology, 521(5):1001–1019.
[7] Felleman, D. J. and Van Essen, D. C. (1991). Distributed Hierarchical Processing in the Primate. Cerebral
Cortex, 1(1):1–47.
[8] Garrett, M. E., Nauhaus, I., Marshel, J. H., and Callaway, E. M. (2014). Topography and Areal Organization
of Mouse Visual Cortex. Journal of Neuroscience, 34(37):12587–12600.
[9] Glickfeld, L. L., Andermann, M. L., Bonin, V., and Reid, R. C. (2013). Cortico-cortical projections in
mouse visual cortex are functionally target specific. Nature Neuroscience, 16(2).
[10] Goodman, C. S. and Shatz, C. J. (1993). Developmental mechanisms that generate precise patterns of
neuronal connectivity. Cell, 72, Supplement:77–98.
[11] Hubel, D. H. and Wiesel, T. N. (1962). Receptive fields, binocular interaction and functional architecture
in the cat's visual cortex. The Journal of Physiology, 160(1):106–154.
[12] Jenett, A., Rubin, G. M., Ngo, T.-T. B., Shepherd, D., Murphy, C., Dionne, H., Pfeiffer, B. D., Cavallaro,
A., Hall, D., Jeter, J., Iyer, N., Fetter, D., Hausenfluck, J. H., Peng, H., Trautman, E. T., Svirskas, R. R.,
Myers, E. W., Iwinski, Z. R., Aso, Y., DePasquale, G. M., Enos, A., Hulamm, P., Lam, S. C. B., Li, H.-H.,
Laverty, T. R., Long, F., Qu, L., Murphy, S. D., Rokicki, K., Safford, T., Shaw, K., Simpson, J. H., Sowell, A.,
Tae, S., Yu, Y., and Zugates, C. T. (2012). A GAL4-Driver Line Resource for Drosophila Neurobiology. Cell
Reports, 2(4):991–1001.
[13] Jonas, E. and Kording, K. (2015). Automatic discovery of cell types and microcircuitry from neural
connectomics. eLife, 4:e04250.
[14] Kleinfeld, D., Bharioke, A., Blinder, P., Bock, D. D., Briggman, K. L., Chklovskii, D. B., Denk, W.,
Helmstaedter, M., Kaufhold, J. P., Lee, W.-C. A., Meyer, H. S., Micheva, K. D., Oberlaender, M., Prohaska,
S., Reid, R. C., Smith, S. J., Takemura, S., Tsai, P. S., and Sakmann, B. (2011). Large-Scale Automated
Histology in the Pursuit of Connectomes. The Journal of Neuroscience, 31(45):16125–16138.
[15] Kuan, L., Li, Y., Lau, C., Feng, D., Bernard, A., Sunkin, S. M., Zeng, H., Dang, C., Hawrylycz, M., and
Ng, L. (2015). Neuroinformatics of the Allen Mouse Brain Connectivity Atlas. Methods, 73:4–17.
[16] Lin, C.-J. (2007). Projected Gradient Methods for Nonnegative Matrix Factorization. Neural Computation,
19(10):2756–2779.
[17] Lynch, R. E., Rice, J. R., and Thomas, D. H. (1964). Tensor product analysis of partial difference equations.
Bulletin of the American Mathematical Society, 70(3):378–384.
[18] Oh, S. W., Harris, J. A., Ng, L., Winslow, B., Cain, N., Mihalas, S., Wang, Q., Lau, C., Kuan, L., Henry,
A. M., Mortrud, M. T., Ouellette, B., Nguyen, T. N., Sorensen, S. A., Slaughterbeck, C. R., Wakeman, W.,
Li, Y., Feng, D., Ho, A., Nicholas, E., Hirokawa, K. E., Bohn, P., Joines, K. M., Peng, H., Hawrylycz, M. J.,
Phillips, J. W., Hohmann, J. G., Wohnoutka, P., Gerfen, C. R., Koch, C., Bernard, A., Dang, C., Jones, A. R.,
and Zeng, H. (2014). A mesoscale connectome of the mouse brain. Nature, 508(7495):207–214.
[19] Peng, H., Tang, J., Xiao, H., Bria, A., Zhou, J., Butler, V., Zhou, Z., Gonzalez-Bellido, P. T., Oh, S. W.,
Chen, J., Mitra, A., Tsien, R. W., Zeng, H., Ascoli, G. A., Iannello, G., Hawrylycz, M., Myers, E., and Long,
F. (2014). Virtual finger boosts three-dimensional imaging and microsurgery as well as terabyte volume
image visualization and analysis. Nature Communications, 5.
[20] Rosa, M. G. and Tweedale, R. (2005). Brain maps, great and small: Lessons from comparative studies of
primate visual cortical organization. Philosophical Transactions of the Royal Society B: Biological Sciences,
360(1456):665–691.
[21] Sporns, O. (2010). Networks of the Brain. The MIT Press, 1st edition.
[22] Udin, S. B. and Fawcett, J. W. (1988). Formation of Topographic Maps. Annual Review of Neuroscience,
11(1):289–327.
[23] Van Loan, C. F. (2000). The ubiquitous Kronecker product. Journal of Computational and Applied
Mathematics, 123(1–2):85–100.
[24] Wahba, G. (1990). Spline Models for Observational Data. SIAM.
[25] Wang, Q. and Burkhalter, A. (2007). Area map of mouse visual cortex. The Journal of Comparative
Neurology, 502(3):339–357.
[26] White, J. G., Southgate, E., Thomson, J. N., and Brenner, S. (1986). The Structure of the Nervous System
of the Nematode Caenorhabditis elegans. Philosophical Transactions of the Royal Society of London B:
Biological Sciences, 314(1165):1–340.
Without-Replacement Sampling
for Stochastic Gradient Methods
Ohad Shamir
Department of Computer Science and Applied Mathematics
Weizmann Institute of Science
Rehovot, Israel
[email protected]
Abstract
Stochastic gradient methods for machine learning and optimization problems are
usually analyzed assuming data points are sampled with replacement. In contrast, sampling without replacement is far less understood, yet in practice it is
very common, often easier to implement, and usually performs better. In this
paper, we provide competitive convergence guarantees for without-replacement
sampling under several scenarios, focusing on the natural regime of few passes
over the data. Moreover, we describe a useful application of these results in the
context of distributed optimization with randomly-partitioned data, yielding a
nearly-optimal algorithm for regularized least squares (in terms of both communication complexity and runtime complexity) under broad parameter regimes. Our
proof techniques combine ideas from stochastic optimization, adversarial online
learning and transductive learning theory, and can potentially be applied to other
stochastic optimization and learning problems.
1 Introduction
Many canonical machine learning problems boil down to solving a convex empirical risk minimization
problem of the form

    min_{w ∈ W} F(w) = (1/m) ∑_{i=1}^m f_i(w),   (1)

where each individual function f_i(·) is convex (e.g. the loss on a given example in the training
data), and the set W ⊆ R^d is convex. In large-scale applications, where both m, d can be huge, a
very popular approach is to employ stochastic gradient methods. Generally speaking, these methods
maintain some iterate w_t ∈ W, and at each iteration, sample an individual function f_i(·), and perform
some update to w_t based on ∇f_i(w_t). Since the update is with respect to a single function, this
update is usually computationally cheap. Moreover, when the sampling is done independently and
uniformly at random, ∇f_i(w_t) is an unbiased estimator of the true gradient ∇F(w_t), which allows
for good convergence guarantees after a reasonable number of iterations (see for instance [18, 15]).
However, in practical implementations of such algorithms, it is actually quite common to use without-replacement sampling, or equivalently, pass sequentially over a random shuffling of the functions
fi . Intuitively, this forces the algorithm to process more equally all data points, and often leads to
better empirical performance. Moreover, without-replacement sampling is often easier and faster to
implement, as it requires sequential data access, as opposed to the random data access required by
with-replacement sampling (see for instance [3, 16, 8]).
1.1 What is Known so Far?
Unfortunately, without-replacement sampling is not covered well by current theory. The challenge is
that unlike with-replacement sampling, the functions processed at every iteration are not statistically
independent, and their correlations are difficult to analyze. Since this lack of theory is the main
motivation for our paper, we describe the existing known results in some detail, before moving to our
contributions.
To begin with, there exist classic convergence results which hold deterministically for every order in
which the individual functions are processed, and in particular when we process them by sampling
without replacement (e.g. [14]). However, these can be exponentially worse than those obtained
using random without-replacement sampling, and this gap is inevitable (see for instance [16]).
More recently, Recht and Ré [16] studied this problem, attempting to show that at least for least
squares optimization, without-replacement sampling is always better (or at least not substantially
worse) than with-replacement sampling on a given dataset. They showed this reduces to a fundamental
conjecture about arithmetic-mean inequalities for matrices, and provided partial results in that
direction, such as when the individual functions themselves are assumed to be generated i.i.d. from
some distribution. However, the general question remains open.
In a recent breakthrough, Gürbüzbalaban et al. [8] provided a new analysis of gradient descent
algorithms for solving Eq. (1) based on random reshuffling: Each epoch, the algorithm draws a new
permutation on {1, ..., m} uniformly at random, and processes the individual functions in that order.
Under smoothness and strong convexity assumptions, the authors obtain convergence guarantees
of essentially O(1/k²) after k epochs, vs. O(1/k) using with-replacement sampling (with the O(·)
notation including certain dependencies on the problem parameters and data size). Thus, without-replacement sampling is shown to be strictly better than with-replacement sampling, after sufficiently
many passes over the data. However, this leaves open the question of why without-replacement
sampling works well after a few, or even just one, passes over the data. Indeed, this is often the
regime at which stochastic gradient methods are most useful, do not require repeated data reshuffling,
and their good convergence properties are well-understood in the with-replacement case.
1.2 Our Results
In this paper, we provide convergence guarantees for stochastic gradient methods, under several
scenarios, in the natural regime where the number of passes over the data is small, and in particular
that no data reshuffling is necessary. We emphasize that our goal here will be more modest than those
of [16, 8]: Rather than show superiority of without-replacement sampling, we only show that it will
not be significantly worse (in a worst-case sense) than with-replacement sampling. Nevertheless, such
guarantees are novel, and still justify the use of without-replacement sampling, especially in situations
where it is advantageous due to data access constraints or other reasons. Moreover, these results
have a useful application in the context of distributed learning and optimization, as we will shortly
describe. Our main contributions can be summarized as follows:
• For convex functions on some convex domain W, we consider algorithms which perform a single
pass over a random permutation of m individual functions, and show that their suboptimality can
be characterized by a combination of two quantities, each from a different field: first, the regret
which the algorithm can attain in the setting of adversarial online convex optimization [17, 9], and
second, the transductive Rademacher complexity of W with respect to the individual functions, a
notion stemming from transductive learning theory [22, 6].
• As a concrete application of the above, we show that if each function f_i(·) corresponds to a convex
Lipschitz loss of a linear predictor, and the algorithm belongs to the class of algorithms which in the
online setting attain O(√T) regret on T such functions (which includes, for example, stochastic
gradient descent), then the suboptimality using without-replacement sampling, after processing
T functions, is O(1/√T). Up to numerical constants, the guarantee is the same as that obtained
using with-replacement sampling.
• We turn to consider more specifically the stochastic gradient descent algorithm, and show that
if the objective function F(·) is λ-strongly convex, and the functions f_i(·) are also smooth, then
the suboptimality bound becomes O(1/λT), which matches the with-replacement guarantees
(although with replacement, smoothness is not needed, and the dependence on some parameters
hidden in the O(·) is somewhat better).
• In recent years, a new set of fast stochastic algorithms to solve Eq. (1) has emerged, such as
SAG, SDCA, SVRG, and quite a few other variants. These algorithms are characterized by
cheap stochastic iterations, involving computations of individual function gradients, yet unlike
traditional stochastic algorithms, enjoy a linear convergence rate (runtime scaling logarithmically
with the required accuracy). To the best of our knowledge, all existing analyses require sampling
with replacement. We consider a representative algorithm from this set, namely SVRG, and the
problem of regularized least squares, and show that similar guarantees can be obtained using
without-replacement sampling. This result has a potentially interesting implication: Under the
mild assumption that the problem's condition number is smaller than the data size, we get that
SVRG can converge to an arbitrarily accurate solution (even up to machine precision), without the
need to reshuffle the data: only a single shuffle at the beginning suffices. Thus, at least for this
problem, we can obtain fast and high-quality solutions even if random data access is expensive.
• A further application of the SVRG result is in the context of distributed learning: By simulating
without-replacement SVRG on data randomly partitioned between several machines, we get a
nearly-optimal algorithm for regularized least squares, in terms of communication and computational
complexity, as long as the condition number is smaller than the data size per machine
(up to logarithmic factors). This builds on the work of Lee et al. [13], who were the first to
recognize the applicability of SVRG to distributed optimization. However, their results relied on
with-replacement sampling, and are applicable only for much smaller condition numbers.
We note that our focus is on scenarios where no reshufflings are necessary. In particular, the O(1/√T)
and O(1/λT) bounds apply for all T ∈ {1, 2, ..., m}, namely up to one full pass over a random
permutation of the entire data. However, our techniques are also applicable to a constant (> 1)
number of passes, by randomly reshuffling the data after every pass. In a similar vein, our SVRG result
can be readily extended to a situation where each epoch of the algorithm is done on an independent
permutation of the data. We leave a full treatment of this to future work.
2 Preliminaries and Notation
We use bold-face symbols to denote vectors. Given a vector w, w_i denotes its i-th coordinate. We
utilize the standard O(·), Θ(·), Ω(·) notation to hide constants, and Õ(·), Θ̃(·), Ω̃(·) to hide constants
and logarithmic factors.
Given convex functions f_1(·), f_2(·), ..., f_m(·) from R^d to R, we define our objective function
F : R^d → R as

    F(w) = (1/m) ∑_{i=1}^m f_i(w),

with some fixed optimum w* ∈ arg min_{w∈W} F(w). In machine learning applications, each individual f_i(·) usually corresponds to a loss with respect to a data point, hence we will use the terms
"individual function", "loss function" and "data point" interchangeably throughout the paper.
We let σ be a permutation over {1, ..., m} chosen uniformly at random. In much of the paper, we
consider methods which draw loss functions without replacement according to that permutation (that
is, f_{σ(1)}(·), f_{σ(2)}(·), f_{σ(3)}(·), ...). We will use the shorthand notation

    F_{1:t-1}(·) = (1/(t-1)) ∑_{i=1}^{t-1} f_{σ(i)}(·),   F_{t:m}(·) = (1/(m-t+1)) ∑_{i=t}^{m} f_{σ(i)}(·)

to denote the average loss with respect to the first t-1 and last m-t+1 loss functions respectively,
as ordered by the permutation (intuitively, the losses in F_{1:t-1}(·) are those already observed by the
algorithm at the beginning of iteration t, whereas the losses in F_{t:m}(·) are those not yet observed).
We use the convention that F_{1:1}(·) ≡ 0, and the same goes for other expressions of the form
(1/(t-1)) ∑_{i=1}^{t-1} ... throughout the paper, when t = 1. We also define quantities such as ∇F_{1:t-1}(·) and
∇F_{t:m}(·) similarly.
A function f : R^d → R is λ-strongly convex if for any w, w′,

    f(w′) ≥ f(w) + ⟨g, w′ - w⟩ + (λ/2)‖w′ - w‖²,

where g is any (sub)-gradient of f at w, and is μ-smooth if for any w, w′,

    f(w′) ≤ f(w) + ⟨g, w′ - w⟩ + (μ/2)‖w′ - w‖².

μ-smoothness also implies that the function f is differentiable, and its gradient is μ-Lipschitz. Based on these properties, it
is easy to verify that if w* ∈ arg min_w f(w), and f is λ-strongly convex and μ-smooth, then

    (λ/2)‖w - w*‖² ≤ f(w) - f(w*) ≤ (μ/2)‖w - w*‖².
We will also require the notion of transductive Rademacher complexity, as developed by El-Yaniv
and Pechyony [6, Definition 1], with a slightly different notation adapted to our setting:
Definition 1. Let V be a set of vectors v = (v_1, ..., v_m) in R^m. Let s, u be positive integers such
that s + u = m, and denote p := su/(s+u)² ∈ (0, 1/2). We define the transductive Rademacher
complexity R_{s,u}(V) as

    R_{s,u}(V) = (1/s + 1/u) · E_{r_1,...,r_m} [ sup_{v∈V} ∑_{i=1}^m r_i v_i ],

where r_1, ..., r_m are i.i.d. random variables such that r_i = 1 or -1 with probability p each, and r_i = 0 with probability 1 - 2p.
This quantity is an important parameter in studying the richness of the set V, and will prove crucial
in providing some of the convergence results presented later on. Note that it differs slightly from
standard Rademacher complexity, which is used in the theory of learning from i.i.d. data, where the
Rademacher variables r_i only take {-1, +1} values, and (1/s + 1/u) is replaced by 1/m.
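For a finite set V, the quantity in Definition 1 can be estimated by direct Monte-Carlo over the r_i's; the sketch below (ours, for intuition only; representing V as a matrix is an assumption) transcribes the definition:

```python
import numpy as np

def transductive_rademacher(V, s, u, n_samples=2000, seed=0):
    """Monte-Carlo estimate of R_{s,u}(V); V is an array of shape (|V|, m)."""
    V = np.asarray(V, dtype=float)
    m = V.shape[1]
    assert s + u == m
    p = s * u / (s + u) ** 2                      # p in (0, 1/2)
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_samples):
        # r_i = +1 or -1 with probability p each, 0 with probability 1 - 2p
        r = rng.choice([1.0, -1.0, 0.0], size=m, p=[p, p, 1 - 2 * p])
        total += np.max(V @ r)                    # sup over the finite set V
    return (1.0 / s + 1.0 / u) * total / n_samples
```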
3 Convex Lipschitz Functions
We begin by considering loss functions f_1(·), f_2(·), ..., f_m(·) which are convex and L-Lipschitz
over some convex domain W. We assume the algorithm sequentially goes over some permuted
ordering of the losses, and before processing the t-th loss function, produces an iterate w_t ∈ W.
Moreover, we assume that the algorithm has a regret bound in the adversarial online setting, namely
that for any sequence of T convex Lipschitz losses f_1(·), ..., f_T(·), and any w ∈ W,

    ∑_{t=1}^T f_t(w_t) - ∑_{t=1}^T f_t(w) ≤ R_T

for some R_T scaling sub-linearly in T¹. For example, online gradient descent (which is equivalent
to stochastic gradient descent when the losses are i.i.d.), with a suitable step size, satisfies R_T =
O(BL√T), where L is the Lipschitz parameter and B upper bounds the norm of any vector in W. A
similar regret bound can also be shown for other online algorithms (see [9, 17, 23]).
Since the ideas used for analyzing this setting will also be used in the more complicated results which
follow, we provide the analysis in some detail. We first have the following straightforward theorem,
which upper bounds the expected suboptimality in terms of regret and the expected difference between
the average loss on prefixes and suffixes of the data.
Theorem 1. Suppose the algorithm has a regret bound R_T, and sequentially processes
f_{σ(1)}(·), ..., f_{σ(T)}(·) where σ is a random permutation on {1, ..., m}. Then in expectation over σ,

    E[ (1/T) ∑_{t=1}^T F(w_t) - F(w*) ] ≤ R_T/T + (1/(mT)) ∑_{t=2}^T (t-1) · E[F_{1:t-1}(w_t) - F_{t:m}(w_t)].

The left hand side in the inequality above can be interpreted as an expected bound on F(w_t) - F(w*),
where t is drawn uniformly at random from 1, 2, ..., T. Alternatively, by Jensen's inequality
and the fact that F(·) is convex, the same bound also applies to E[F(w̄_T) - F(w*)], where
w̄_T = (1/T) ∑_{t=1}^T w_t.
The proof of the theorem relies on the following simple but key lemma, which expresses the expected
difference between with-replacement and without-replacement sampling in an alternative form,
similar to Thm. 1 and one which lends itself to tools and ideas from transductive learning theory.
This lemma will be used in proving all our main results, and its proof appears in Subsection A.2.
¹ For simplicity, we assume the algorithm is deterministic given f_1, ..., f_m, but all results in this section also
hold for randomized algorithms (in expectation over the algorithm's randomness), assuming the expected regret
of the algorithm w.r.t. any w ∈ W is at most R_T.
Lemma 1. Let σ be a permutation over {1, ..., m} chosen uniformly at random. Let s_1, ..., s_m ∈ R
be random variables which, conditioned on σ(1), ..., σ(t-1), are independent of σ(t), ..., σ(m). Then

    E[ (1/m) ∑_{i=1}^m s_i - s_{σ(t)} ] = ((t-1)/m) · E[ s_{1:t-1} - s_{t:m} ]

for t > 1 (with s_{1:t-1}, s_{t:m} the prefix and suffix averages, defined analogously to F_{1:t-1}, F_{t:m}), and 0 for t = 1.
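Lemma 1 is easy to sanity-check numerically. In the sketch below (ours), s_i = v·x_i where v is the mean of the prefix x_{σ(1)}, ..., x_{σ(t-1)}, so the conditioning requirement holds by construction; the two returned numbers should nearly coincide:

```python
import numpy as np

def lemma1_check(x, t, n_trials=100000, seed=0):
    """Compare E[(1/m) sum_i s_i - s_{sigma(t)}] with ((t-1)/m) E[s_{1:t-1} - s_{t:m}]."""
    rng = np.random.default_rng(seed)
    m = len(x)
    lhs = rhs = 0.0
    for _ in range(n_trials):
        sig = rng.permutation(m)
        v = x[sig[:t - 1]].mean()          # depends only on sigma(1..t-1)
        s = v * x                          # s_i = v * x_i
        lhs += s.mean() - s[sig[t - 1]]
        rhs += s[sig[:t - 1]].mean() - s[sig[t - 1:]].mean()
    return lhs / n_trials, (t - 1) / m * rhs / n_trials

x = np.random.default_rng(1).normal(size=20)
print(lemma1_check(x, t=5))
```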
Proof of Thm. 1. Adding and subtracting terms, and using the facts that σ is a permutation chosen
uniformly at random, and w* is fixed,

    E[ (1/T) ∑_{t=1}^T F(w_t) - F(w*) ]
      = E[ (1/T) ∑_{t=1}^T f_{σ(t)}(w_t) - F(w*) ] + E[ (1/T) ∑_{t=1}^T ( F(w_t) - f_{σ(t)}(w_t) ) ]
      = E[ (1/T) ∑_{t=1}^T ( f_{σ(t)}(w_t) - f_{σ(t)}(w*) ) ] + E[ (1/T) ∑_{t=1}^T ( F(w_t) - f_{σ(t)}(w_t) ) ].

Applying the regret bound assumption on the sequence of losses f_{σ(1)}(·), ..., f_{σ(T)}(·), the above is
at most R_T/T + (1/T) ∑_{t=1}^T E[ F(w_t) - f_{σ(t)}(w_t) ]. Since w_t (as a random variable over the permutation
σ of the data) depends only on σ(1), ..., σ(t-1), we can use Lemma 1 (where s_i = f_i(w_t), and
noting that the expectation above is 0 when t = 1), and get that the above equals R_T/T + (1/(mT)) ∑_{t=2}^T (t-1) · E[F_{1:t-1}(w_t) - F_{t:m}(w_t)].
Having reduced the expected suboptimality to the expected difference E[F_{1:t-1}(w_t) - F_{t:m}(w_t)],
the next step is to upper bound it with E[sup_{w∈W}(F_{1:t-1}(w) - F_{t:m}(w))]: Namely, having split
our loss functions at random into two groups of size t-1 and m-t+1, how large can the difference
between the average loss of any w on the two groups be? Such uniform convergence bounds are exactly
the type studied in transductive learning theory, where a fixed dataset is randomly split into a training
set and a test set, and we consider the generalization performance of learning algorithms run on the
training set. Such results can be provided in terms of the transductive Rademacher complexity of W,
and combined with Thm. 1, lead to the following bound in our setting:
Theorem 2. Suppose that each w_t is chosen from a fixed domain W, that the algorithm enjoys a
regret bound R_T, and that sup_{i,w∈W} |f_i(w)| ≤ B. Then in expectation over the random permutation σ,

    E[ (1/T) ∑_{t=1}^T F(w_t) - F(w*) ] ≤ R_T/T + (1/(mT)) ∑_{t=2}^T (t-1) R_{t-1,m-t+1}(V) + 24B/√m,

where V = {(f_1(w), ..., f_m(w)) | w ∈ W}.
Thus, we obtained a generic bound which depends on the online learning characteristics of the
algorithm, as well as the statistical learning properties of the class W on the loss functions. The proof
(as the proofs of all our results from now on) appears in Section A.
We now instantiate the theorem to the prototypical case of bounded-norm linear predictors, where the
loss is some convex and Lipschitz function of the prediction ⟨w, x⟩ of a predictor w on an instance
x, possibly with some regularization:
Corollary 1. Under the conditions of Thm. 2, suppose that W ⊆ {w : ‖w‖ ≤ B̄}, and each loss
function f_i has the form ℓ_i(⟨w, x_i⟩) + r(w) for some L-Lipschitz ℓ_i, ‖x_i‖ ≤ 1, and a fixed function r. Then

    E[ (1/T) ∑_{t=1}^T F(w_t) - F(w*) ] ≤ R_T/T + 2(12 + √2) B̄ L √T / m.

As discussed earlier, in the setting of Corollary 1, typical regret bounds are on the order of O(B̄L√T).
Thus, the expected suboptimality is O(B̄L/√T), all the way up to T = m (i.e. a full pass over a
random permutation of the data). Up to constants, this is the same as the suboptimality attained by T
iterations of with-replacement sampling, using stochastic gradient descent or similar algorithms.
4 Faster Convergence for Strongly Convex Functions
We now consider more specifically the stochastic gradient descent algorithm on a convex domain W,
which can be described as follows: We initialize at some w_1 ∈ W, and perform the update steps

    w_{t+1} = Π_W(w_t - η_t g_t),

where η_t > 0 are fixed step sizes, Π_W is projection onto W, and g_t is a subgradient of f_{σ(t)}(·) at
w_t. Moreover, we assume the objective function F(·) is λ-strongly convex for some λ > 0. In this
scenario, using with-replacement sampling (i.e. g_t is a subgradient of an independently drawn f_i(·)),
performing T iterations as above and returning a randomly sampled iterate w_t or their average results
in expected suboptimality Õ(G²/λT), where G² is an upper bound on the expected squared norm of
g_t [15, 18]. Here, we study a similar situation in the without-replacement case.
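For concreteness, the without-replacement variant analyzed next can be written as the following sketch (ours; `proj_W` and `grad_f` are assumed oracles for the projection Π_W and a subgradient of f_i), using the step size η_t = 2/λt from Theorem 3 below:

```python
import numpy as np

def sgd_without_replacement(grad_f, proj_W, w1, m, lam, T, seed=0):
    """Projected SGD over a random permutation with eta_t = 2/(lam*t);
    returns the average iterate (1/T) sum_t w_t."""
    rng = np.random.default_rng(seed)
    sigma = rng.permutation(m)
    w = np.asarray(w1, dtype=float)
    avg = np.zeros_like(w)
    for t in range(1, T + 1):
        eta = 2.0 / (lam * t)
        w = proj_W(w - eta * grad_f(sigma[t - 1], w))
        avg += w
    return avg / T
```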
In the result below, we will consider specifically the case where each f_i(w) is a Lipschitz and smooth
loss of a linear predictor w, possibly with some regularization. The smoothness assumption is needed
to get a good bound on the transductive Rademacher complexity of quantities such as ⟨∇f_i(w), w⟩.
However, the technique can be potentially applied to more general cases.
Theorem 3. Suppose W has diameter B, and that F(·) is λ-strongly convex on W. Assume that
f_i(w) = ℓ_i(⟨w, x_i⟩) + r(w) where ‖x_i‖ ≤ 1, r(·) is possibly some regularization term, and each
ℓ_i is L-Lipschitz and μ-smooth on {z : z = ⟨w, x⟩, w ∈ W, ‖x‖ ≤ 1}. Furthermore, suppose
sup_{w∈W} ‖∇f_i(w)‖ ≤ G. Then for any 1 < T ≤ m, if we run SGD for T iterations with step size
η_t = 2/λt, we have (for a universal positive constant c)

    E[ (1/T) ∑_{t=1}^T F(w_t) - F(w*) ] ≤ c · ((L + μB)² + G²) log(T) / (λT).
As in the results of the previous section, the left hand side is the expected suboptimality of a single
w_t where t is chosen uniformly at random, or an upper bound on the expected suboptimality of the
average w̄_T = (1/T) ∑_{t=1}^T w_t. This result is similar to the with-replacement case, up to numerical
constants and the additional (L + μB)² term in the numerator. We note that in some cases, G² is
the dominant term anyway². However, it is not clear that the (L + μB)² term is necessary, and
removing it is left to future work. We also note that the log(T) factor in the theorem can be removed
by considering not (1/T) ∑_{t=1}^T F(w_t), but rather only an average over some suffix of the iterates, or
weighted averaging (see for instance [15, 12, 21], where the same techniques can be applied here).
The proof of Thm. 3 is somewhat more challenging than the results of the previous section, since
we are attempting to get a faster rate of O(1/T) rather than O(1/√T), all the way up to T = m. A
significant technical obstacle is that our proof technique relies on concentration of averages around
expectations, which on T samples does not go down faster than O(1/√T). To overcome this, we
apply concentration results not on the function values (i.e. F_{1:t-1}(w_t) - F_{t:m}(w_t) as in the previous
section), but rather on gradient-iterate inner products, i.e. ⟨∇F_{1:t-1}(w_t) - ∇F_{t:m}(w_t), w_t - w*⟩,
where w* is the optimal solution. To get good bounds, we need to assume these gradients have a
certain structure, which is why we need to make the assumption in the theorem about the form of each
f_i(·). Using transductive Rademacher complexity tools, we manage to upper bound the expectation
of these inner products by quantities roughly of the form √(E[‖w_t - w*‖²]/t) (assuming here
t < m/2 for simplicity). We now utilize the fact that in the strongly convex case, ‖w_t - w*‖ itself
decreases to zero with t at a certain rate, to get fast rates decreasing as 1/t.
5 Without-Replacement SVRG for Least Squares
In this section, we will consider a more sophisticated stochastic gradient approach, namely the SVRG
algorithm of [11], designed to solve optimization problems with a finite sum structure as in Eq. (1).
Unlike purely stochastic gradient procedures, this algorithm does require several passes over the
data. However, assuming the condition number 1/λ is smaller than the data size (assuming each
f_i(·) is O(1)-smooth, and λ is the strong convexity parameter of F(·)), only O(m log(1/ε)) gradient
evaluations are required to get an ε-accurate solution, for any ε. Thus, we can get a high-accuracy
solution after the equivalent of a small number of data passes. As discussed in the introduction, over
the past few years several other algorithms have been introduced and shown to have such a behavior.
We will focus on the algorithm in its basic form, where the domain W equals R^d.
The existing analysis of SVRG ([11]) assumes stochastic iterations, which involve sampling the data
with replacement. Thus, it is natural to consider whether a similar convergence behavior occurs using
² G is generally on the order of L + λB, which is the same as L + μB if L is the dominant term. This
happens for instance with the squared loss, whose Lipschitz parameter is on the order of μB.
without-replacement sampling. As we shall see, a positive answer has at least two implications: The
first is that as long as the condition number is smaller than the data size, SVRG can be used to obtain
a high-accuracy solution, without the need to reshuffle the data: Only a single shuffle at the beginning
suffices, and the algorithm terminates after a small number of sequential passes (logarithmic in the
required accuracy). The second implication is that such without-replacement SVRG can be used
to get a nearly-optimal algorithm for convex distributed learning and optimization on randomly
partitioned data, as long as the condition number is smaller than the data size at each machine.
The SVRG algorithm using without-replacement sampling on a dataset of size m is described
as Algorithm 1. It is composed of multiple epochs (indexed by s), each involving one gradient
computation on the entire dataset, and T stochastic iterations, involving gradient computations with
respect to individual data points. Although the gradient computation on the entire dataset is expensive,
it is only needed to be done once per epoch. Since the algorithm will be shown to have linear
convergence as a function of the number of epochs, this requires only a small (logarithmic) number
of passes over the data.
Algorithm 1 SVRG using Without-Replacement Sampling
Parameters: η, T, permutation σ on {1, ..., m}
Initialize w̃_1 at 0
for s = 1, 2, ... do
    w_{(s-1)T+1} := w̃_s
    ñ := ∇F(w̃_s) = (1/m) ∑_{i=1}^m ∇f_i(w̃_s)
    for t = (s-1)T+1, ..., sT do
        w_{t+1} := w_t - η ( ∇f_{σ(t)}(w_t) - ∇f_{σ(t)}(w̃_s) + ñ )
    end for
    Let w̃_{s+1} be the average of w_{(s-1)T+1}, ..., w_{sT}, or one of them drawn uniformly at random.
end for
We will consider specifically the regularized least mean squares problem, where

    f_i(w) = (1/2)(⟨w, x_i⟩ - y_i)² + (λ̄/2)‖w‖²,   (2)

for some x_i, y_i and λ̄ > 0. Moreover, we assume that F(w) = (1/m) ∑_{i=1}^m f_i(w) is λ-strongly convex
(note that necessarily λ ≥ λ̄). For convenience, we will assume that ‖x_i‖, |y_i|, λ̄ are all at most 1 (this
is without much loss of generality, since we can always re-scale the loss functions by an appropriate
factor). Note that under this assumption, each f_i(·) as well as F(·) are also (1 + λ̄) ≤ 2-smooth.
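Combining Algorithm 1 with Eq. (2), a direct implementation for regularized least squares might look as follows (our sketch; X has rows x_i, y the targets, and we assume the total number of stochastic iterations stays below m so a single shuffle suffices, as in the regime of Theorem 4 below):

```python
import numpy as np

def svrg_no_replacement(X, y, lam_bar, eta, T, n_epochs, seed=0):
    """Without-replacement SVRG (Algorithm 1) for
    f_i(w) = 0.5*(<w, x_i> - y_i)^2 + 0.5*lam_bar*||w||^2."""
    m, d = X.shape
    sigma = np.random.default_rng(seed).permutation(m)   # the only shuffle
    grad = lambda w, i: (X[i] @ w - y[i]) * X[i] + lam_bar * w
    w_tilde, pos = np.zeros(d), 0
    for _ in range(n_epochs):
        n_tilde = X.T @ (X @ w_tilde - y) / m + lam_bar * w_tilde  # full gradient
        w, ws = w_tilde.copy(), []
        for _ in range(T):
            i = sigma[pos]; pos += 1      # sequential walk through the permutation
            w = w - eta * (grad(w, i) - grad(w_tilde, i) + n_tilde)
            ws.append(w)
        w_tilde = np.mean(ws, axis=0)
    return w_tilde
```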
Theorem 4. Suppose each loss function f_i(·) corresponds to Eq. (2), where x_i ∈ R^d, max_i ‖x_i‖ ≤ 1,
max_i |y_i| ≤ 1, λ̄ > 0, and that F(·) is λ-strongly convex, where λ ∈ (0, 1). Moreover, let B ≥ 1
be such that ‖w*‖² ≤ B and max_t F(w_t) - F(w*) ≤ B with probability 1 over the random
permutation. There is some universal constant c₀ ≥ 1, such that for any c ≥ c₀ and any ε ∈ (0, 1), if
we run Algorithm 1, using parameters η, T satisfying

    η = 1/c,   T ≥ 9/(λη),   m ≥ c log₂(64dmB²/ε) T,

then after S = ⌈log₄(9/ε)⌉ epochs of the algorithm, w̃_{S+1} satisfies E[F(w̃_{S+1}) - min_w F(w)] ≤ ε.
In particular, by taking η = Θ(1) and T = Θ(1/λ), the algorithm attains an ε-accurate solution after
O(log(1/ε)/λ) stochastic iterations of without-replacement sampling, and O(log(1/ε)) sequential
passes over the data to compute gradients of F(·). This implies that as long as 1/λ (which stands
for the condition number of the problem) is smaller than O(m/log(1/ε)), the number of without-replacement
stochastic iterations is smaller than the data size m. Thus, assuming the data is randomly
shuffled, we can get a solution using only sequential data passes, without the need to reshuffle.
In terms of the log factors, we note that the condition max_t F(w_t) - F(w*) ≤ B with probability 1
is needed for technical reasons in our analysis, and we conjecture that it can be improved. However,
since B appears only inside log factors, even a crude bound would suffice. In Appendix C, we indeed
show that there is always a valid B satisfying log(B) = O(log(1/ε) log(T) + log(1/λ̄)).
Regarding the logarithmic dependence on the dimension d, it is due to an application of a matrix
Bernstein inequality for d × d matrices, and can possibly be improved.
5.1 Application of Without-Replacement SVRG to distributed learning
An important variant of the problems we discussed so far is when training data (or equivalently,
the individual loss functions f_1(·), ..., f_m(·)) are split between different machines, which need to
communicate in order to reach a good solution. This naturally models situations where data is too
large to fit on a single machine, or where we wish to speed up the optimization process by parallelizing
it on several computers. Over the past few years, there has been much research on this question in the
machine learning community, with just a few examples including [24, 2, 1, 5, 4, 10, 20, 19, 25, 13].
A substantial number of these works focus on the setting where the data is split equally at random
between k machines (or equivalently, that data is assigned to each machine by sampling without
replacement from {f_1(·), ..., f_m(·)}). Intuitively, this creates statistical similarities between the
data at different machines, which can be leveraged to aid the optimization process. Recently, Lee
et al. [13] proposed a simple and elegant approach, which applies at least in certain parameter
regimes. This is based on the observation that SVRG, according to its with-replacement analysis,
requires O(log(1/ε)) epochs, where in each epoch one needs to compute an exact gradient of the
objective function F(·) = (1/m) ∑_{i=1}^m f_i(·), and O(1/λ) gradients of individual losses f_i(·) chosen
uniformly at random (where ε is the required suboptimality, and λ is the strong convexity parameter
of F(·)). Therefore, if each machine had i.i.d. samples from {f_1(·), ..., f_m(·)}, whose union
covers {f_1(·), ..., f_m(·)}, the machines could just simulate SVRG: First, each machine splits its
data into batches of size O(1/λ). Then, each SVRG epoch is simulated by the machines computing a
gradient of F(·) = (1/m) ∑_{i=1}^m f_i(·), which can be fully parallelized and involves one communication
round (assuming a broadcast communication model), and one machine computing gradients with
respect to one of its batches. Overall, this would require O(log(1/ε)) communication rounds, and
O((m/k + 1/λ) log(1/ε)) runtime, where m/k is the number of data points per machine (ignoring
communication time, and assuming constant time to compute a gradient of f_i(·)). Under the
reasonable assumption that the strong convexity parameter λ is at least k/m, this requires runtime
linear in the data size per machine, and logarithmic in the target accuracy ε, with only a logarithmic
number of communication rounds. Up to log factors, this is essentially the best one can hope for
with a distributed algorithm. Moreover, a lower bound in [13] indicates that at least in the worst case,
O(log(1/ε)) communication rounds is impossible to obtain if λ is significantly smaller than k/m.
Unfortunately, the reasoning above crucially relies on each machine having access to i.i.d. samples,
which can be reasonable in some cases, but is different than the more common assumption that the
data is randomly split among the machines. To circumvent this issue, [13] propose to communicate
individual data points / losses f_i(·) between machines, so as to simulate i.i.d. sampling. However,
by the birthday paradox, this only works well in the regime where the overall number of samples
required, O((1/λ) log(1/ε)), is not much larger than √m. Otherwise, with-replacement and without-replacement
sampling become statistically very different, and a large number of data points would
need to be communicated. In other words, if communication is an expensive resource, then the
solution proposed in [13] only works well when λ is at least order of 1/√m. In machine learning
applications, the strong convexity parameter λ often comes from explicit regularization designed to
prevent over-fitting, and needs to scale with the data size, usually between 1/√m and 1/m. Thus,
the solution above is communication-efficient only when λ is relatively large.
However, the situation immediately improves if we can use a without-replacement version of SVRG,
which can easily be simulated with randomly partitioned data: The stochastic batches can now be
simply subsets of each machine's data, which are statistically identical to sampling {f_1(·), ..., f_m(·)}
without replacement. Thus, no data points need to be sent across machines, even if λ is small. For
clarity, we present an explicit pseudocode as Algorithm 2 in Appendix D.
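Since Appendix D is not reproduced in this excerpt, the following sketch (ours; `full_grad` via an all-reduce and `stoch_grad` are assumed communication/oracle primitives) conveys the simulation: machines hold disjoint batches from one global shuffle, and each epoch costs a single communication round plus one local inner loop:

```python
def distributed_svrg_epoch(local_batches, active, full_grad, stoch_grad, w_tilde, eta):
    """One simulated epoch of distributed without-replacement SVRG.
    local_batches[k] is machine k's list of unused index batches, each of size
    about 1/lambda, cut from a single global random partition of the data."""
    n_tilde = full_grad(w_tilde)             # parallelized; one broadcast/all-reduce round
    w = w_tilde.copy()
    for i in local_batches[active].pop(0):   # the active machine consumes its next batch
        w = w - eta * (stoch_grad(i, w) - stoch_grad(i, w_tilde) + n_tilde)
    return w                                 # becomes w_tilde for the next epoch
```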
Let us consider the analysis of without-replacement SVRG provided in Thm. 4. According to this analysis,
by setting T = Θ(1/λ), then as long as the total number of batches is at least Ω(log(1/ε)), and
λ = Ω̃(1/m), the algorithm will attain an ε-suboptimal solution in expectation. In other words,
without any additional communication, we extend the applicability of distributed SVRG (at least for
regularized least squares) from the λ = Ω(1/√m) regime to λ = Ω̃(1/m).
We emphasize that this formal analysis only applies to the regularized squared loss, which is the scope of
Thm. 4. However, this algorithmic approach can be applied to any loss function, and we conjecture
that it will have similar performance for any smooth losses and strongly-convex objectives.
Acknowledgments: This research is supported in part by an FP7 Marie Curie CIG grant, an ISF grant
425/13, and by the Intel Collaborative Research Institute for Computational Intelligence (ICRI-CI).
References
[1] A. Agarwal, O. Chapelle, M. Dudík, and J. Langford. A reliable effective terascale linear learning system.
CoRR, abs/1110.4198, 2011.
[2] M.-F. Balcan, A. Blum, S. Fine, and Y. Mansour. Distributed learning, communication complexity and
privacy. In COLT, 2012.
[3] L. Bottou. Stochastic gradient descent tricks. In Neural Networks: Tricks of the Trade. Springer, 2012.
[4] A. Cotter, O. Shamir, N. Srebro, and K. Sridharan. Better mini-batch algorithms via accelerated gradient
methods. In NIPS, 2011.
[5] O. Dekel, R. Gilad-Bachrach, O. Shamir, and L. Xiao. Optimal distributed online prediction using
mini-batches. Journal of Machine Learning Research, 13:165–202, 2012.
[6] R. El-Yaniv and D. Pechyony. Transductive rademacher complexity and its applications. Journal of
Artificial Intelligence Research, 35:193–234, 2009.
[7] D. Gross and V. Nesme. Note on sampling without replacing from a finite collection of matrices. arXiv
preprint arXiv:1001.2738, 2010.
[8] M. Gürbüzbalaban, A. Ozdaglar, and P. Parrilo. Why random reshuffling beats stochastic gradient descent.
arXiv preprint arXiv:1510.08560, 2015.
[9] E. Hazan. Introduction to online convex optimization. Book draft, 2015.
[10] M. Jaggi, V. Smith, M. Takáč, J. Terhorst, S. Krishnan, T. Hofmann, and M. Jordan. Communication-efficient distributed dual coordinate ascent. In NIPS, 2014.
[11] R. Johnson and T. Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In
NIPS, 2013.
[12] S. Lacoste-Julien, M. Schmidt, and F. Bach. A simpler approach to obtaining an O(1/t) convergence rate
for the projected stochastic subgradient method. arXiv preprint arXiv:1212.2002, 2012.
[13] J. Lee, T. Ma, and Q. Lin. Distributed stochastic variance reduced gradient methods. arXiv preprint
arXiv:1507.07595, 2015.
[14] A. Nedić and D. Bertsekas. Convergence rate of incremental subgradient algorithms. In Stochastic
Optimization: Algorithms and Applications, pages 223–264. Springer, 2001.
[15] A. Rakhlin, O. Shamir, and K. Sridharan. Making gradient descent optimal for strongly convex stochastic
optimization. arXiv preprint arXiv:1109.5647, 2011.
[16] B. Recht and C. Ré. Beneath the valley of the noncommutative arithmetic-geometric mean inequality:
conjectures, case-studies, and consequences. In COLT, 2012.
[17] S. Shalev-Shwartz. Online learning and online convex optimization. Foundations and Trends in Machine
Learning, 4(2):107–194, 2011.
[18] S. Shalev-Shwartz and S. Ben-David. Understanding machine learning: From Theory to Algorithms.
Cambridge University Press, 2014.
[19] O. Shamir and N. Srebro. On distributed stochastic optimization and learning. In Allerton, 2014.
[20] O. Shamir, N. Srebro, and T. Zhang. Communication-efficient distributed optimization using an approximate Newton-type method. In ICML, 2014.
[21] O. Shamir and T. Zhang. Stochastic gradient descent for non-smooth optimization: Convergence results
and optimal averaging schemes. In ICML, 2013.
[22] V. Vapnik. Statistical learning theory. Wiley New York, 1998.
[23] L. Xiao. Dual averaging methods for regularized stochastic learning and online optimization. Journal of
Machine Learning Research, 11:2543–2596, 2010.
[24] Y. Zhang, J. Duchi, and M. Wainwright. Communication-efficient algorithms for statistical optimization.
Journal of Machine Learning Research, 14:3321–3363, 2013.
[25] Y. Zhang and L. Xiao. Communication-efficient distributed optimization of self-concordant empirical loss.
In ICML, 2015.
Felix Xinnan Yu Ananda Theertha Suresh Krzysztof Choromanski
Daniel Holtmann-Rice Sanjiv Kumar
Google Research, New York
{felixyu, theertha, kchoro, dhr, sanjivk}@google.com
Abstract
We present an intriguing discovery related to Random Fourier Features: in Gaussian
kernel approximation, replacing the random Gaussian matrix by a properly scaled
random orthogonal matrix significantly decreases kernel approximation error. We
call this technique Orthogonal Random Features (ORF), and provide theoretical
and empirical justification for this behavior. Motivated by this discovery, we further
propose Structured Orthogonal Random Features (SORF), which uses a class of
structured discrete orthogonal matrices to speed up the computation. The method
reduces the time cost from O(d2 ) to O(d log d), where d is the data dimensionality,
with almost no compromise in kernel approximation quality compared to ORF.
Experiments on several datasets verify the effectiveness of ORF and SORF over the
existing methods. We also provide discussions on using the same type of discrete
orthogonal structure for a broader range of applications.
1 Introduction
Kernel methods are widely used in nonlinear learning [8], but they are computationally expensive for
large datasets. Kernel approximation is a powerful technique to make kernel methods scalable, by
mapping input features into a new space where dot products approximate the kernel well [19]. With
accurate kernel approximation, efficient linear classifiers can be trained in the transformed space
while retaining the expressive power of nonlinear methods [10, 21].
Formally, given a kernel K(·, ·) : R^d × R^d → R, kernel approximation methods seek to find a
nonlinear transformation φ(·) : R^d → R^{d′} such that, for any x, y ∈ R^d,

    K(x, y) ≈ K̂(x, y) = φ(x)^⊤ φ(y).
Random Fourier Features [19] are used widely in approximating smooth, shift-invariant kernels. This
technique requires the kernel to exhibit two properties: 1) shift-invariance, i.e. K(x, y) = K(Δ)
where Δ = x - y; and 2) positive semi-definiteness of K(Δ) on R^d. The second property guarantees
that the Fourier transform of K(Δ) is a nonnegative function [3]. Let p(w) be the Fourier transform
of K(Δ). Then,

    K(x - y) = ∫_{R^d} p(w) e^{j w^⊤ (x-y)} dw.

This means that one can treat p(w) as a density function and use Monte-Carlo sampling to derive the
following nonlinear map for a real-valued kernel:

    φ(x) = √(1/D) [ sin(w_1^⊤ x), ..., sin(w_D^⊤ x), cos(w_1^⊤ x), ..., cos(w_D^⊤ x) ]^⊤,

where w_i is sampled i.i.d. from a probability distribution with density p(w). Let W =
[w_1, ..., w_D]^⊤. The linear transformation Wx is central to the above computation since:
[Figure 1; panels: (a) USPS, (b) MNIST, (c) CIFAR; curves: RFF (Random Gaussian), ORF (Random Orthogonal); y-axis: MSE, x-axis: D/d.]
Figure 1: Kernel approximation mean squared error (MSE) for the Gaussian kernel K(x, y) =
e^{-‖x-y‖²/(2σ²)}. D: number of rows in the linear transformation W. d: input dimension. ORF
imposes orthogonality on W (Section 3).
• The choice of matrix W determines how well the estimated kernel converges to the actual kernel;
• The computation of Wx has space and time costs of O(Dd). This is expensive for high-dimensional
data, especially since D is often required to be larger than d to achieve low approximation error.
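As a reference point for what follows, the RFF pipeline above is only a few lines (our sketch; it transcribes φ(x) directly):

```python
import numpy as np

def rff_features(X, D, sigma, seed=0):
    """Map rows of X (n x d) to 2D Random Fourier Features approximating
    K(x, y) = exp(-||x - y||^2 / (2 * sigma^2))."""
    n, d = X.shape
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(D, d)) / sigma      # W_RFF = (1/sigma) G
    Z = X @ W.T                              # the central O(nDd) computation Wx
    return np.hstack([np.sin(Z), np.cos(Z)]) / np.sqrt(D)
```

Calling this on two datasets with the same seed (hence the same W), the product `rff_features(X, D, sigma) @ rff_features(Y, D, sigma).T` approximates the kernel matrix between X and Y.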
In this work, we address both of the above issues. We first show an intriguing discovery (Figure 1):
by enforcing orthogonality on the rows of W, the kernel approximation error can be significantly
reduced. We call this method Orthogonal Random Features (ORF). Section 3 describes the method
and provides theoretical explanation for the improved performance.
Since both generating a d × d orthogonal matrix (O(d³) time and O(d²) space) and computing the
transformation (O(d²) time and space) are prohibitively expensive for high-dimensional data, we
further propose Structured Orthogonal Random Features (SORF) in Section 4. The idea is to replace
random orthogonal matrices by a class of special structured matrices consisting of products of binary
diagonal matrices and Walsh-Hadamard matrices. SORF has fast computation time, O(D log d),
and almost no extra memory cost (with efficient in-place implementation). We show extensive
experiments in Section 5. We also provide theoretical discussions in Section 6 of applying the
structured matrices in a broader range of applications where a random Gaussian matrix is used.
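The precise SORF construction is given in Section 4 of the paper (beyond this excerpt); the sketch below (ours) shows one instantiation consistent with the description above, assuming three Hadamard-diagonal blocks and a √d/σ scale, and illustrates why the cost is O(d log d) and the extra memory only the ±1 diagonals:

```python
import numpy as np

def fwht(x):
    """In-place fast Walsh-Hadamard transform of a length-2^k vector."""
    h, n = 1, len(x)
    while h < n:
        for i in range(0, n, 2 * h):
            a, b = x[i:i + h].copy(), x[i + h:i + 2 * h].copy()
            x[i:i + h], x[i + h:i + 2 * h] = a + b, a - b
        h *= 2
    return x

def sorf_transform(x, sigma, n_blocks=3, seed=0):
    """Apply x -> (sqrt(d)/sigma) * H D_3 H D_2 H D_1 x in O(d log d) time;
    D_i are random +/-1 diagonals and H is the normalized Hadamard matrix.
    d = len(x) must be a power of 2. (Number of blocks and scaling are our
    assumptions based on the description in the text.)"""
    d = len(x)
    rng = np.random.default_rng(seed)
    z = np.asarray(x, dtype=float).copy()
    for _ in range(n_blocks):
        z *= rng.choice([-1.0, 1.0], size=d)   # binary diagonal D_i
        z = fwht(z) / np.sqrt(d)               # normalized (orthogonal) Hadamard
    return np.sqrt(d) / sigma * z
```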
2 Related Works
Explicit nonlinear random feature maps have been constructed for many types of kernels, such as
intersection kernels [15], generalized RBF kernels [22], skewed multiplicative histogram kernels
[14], additive kernels [24], and polynomial kernels [11, 18]. In this paper, we focus on approximating
Gaussian kernels following the seminal Random Fourier Features (RFF) framework [19], which has
been extensively studied both theoretically and empirically [26, 20, 23].
Key to the RFF technique is Monte-Carlo sampling. It is well known that the convergence of MonteCarlo can be largely improved by carefully choosing a deterministic sequence instead of random
samples [17]. Following this line of reasoning, Yang et al. [25] proposed to use low-displacement
rank sequences in RFF. Yu et al. [28] studied optimizing the sequences in a data-dependent fashion to
achieve more compact maps. In contrast to the above works, this paper is motivated by an intriguing
new discovery that using orthogonal random samples provides much faster convergence. Compared
to [25], the proposed SORF method achieves both lower kernel approximation error and greatly
reduced computation and memory costs. Furthermore, unlike [28], the results in this paper are data
independent.
Structured matrices have been used for speeding up dimensionality reduction [1], binary embedding
[27], deep neural networks [5] and kernel approximation [13, 28, 7]. For the kernel approximation
works, in particular, the "structured randomness" leads to a minor loss of accuracy, but allows faster
computation since the structured matrices enable the use of FFT-like algorithms. Furthermore, these
matrices provide substantial model compression since they require subquadratic (usually only linear)
Method                                    | Extra Memory  | Time        | Lower error than RFF?
Random Fourier Feature (RFF) [19]         | O(Dd)         | O(Dd)       | (baseline)
Compact Nonlinear Map (CNM) [28]          | O(Dd)         | O(Dd)       | Yes (data-dependent)
Quasi-Monte Carlo (QMC) [25]              | O(Dd)         | O(Dd)       | Yes
Structured (fastfood/circulant) [28, 13]  | O(D)          | O(D log d)  | No
Orthogonal Random Feature (ORF)           | O(Dd)         | O(Dd)       | Yes
Structured ORF (SORF)                     | O(D) or O(1)  | O(D log d)  | Yes
Table 1: Comparison of different kernel approximation methods under the framework of Random
Fourier Features [19]. We assume D ≥ d. The proposed SORF method has O(D) degrees of
freedom. The computations can be efficiently implemented as in-place operations with fixed random
seeds. Therefore it can cost O(1) in extra space.
space. In comparison with the above works, our proposed methods SORF and ORF are more effective
than RFF. In particular SORF demonstrates both lower approximation error and better efficiency than
RFF. Table 1 compares the space and time costs of different techniques.
3 Orthogonal Random Features
Our goal is to approximate a Gaussian kernel of the form

    K(x, y) = e^{-‖x-y‖²/(2σ²)}.

In the paragraphs below, we assume a square linear transformation matrix W ∈ R^{D×d}, D = d.
When D < d, we simply use the first D dimensions of the result. When D > d, we use multiple
independently generated random features and concatenate the results. We comment on this setting at
the end of this section.
Recall that the linear transformation matrix of RFF can be written as

    W_RFF = (1/σ) G,   (1)

where G ∈ R^{d×d} is a random Gaussian matrix, with every entry sampled independently from the
standard normal distribution. Denote the approximate kernel based on the above W_RFF as K_RFF(x, y).
For completeness, we first show the expectation and variance of K_RFF(x, y).
Lemma 1. (Appendix A.2) K_RFF(x, y) is an unbiased estimator of the Gaussian kernel, i.e.,
E(K_RFF(x, y)) = e^{-‖x-y‖²/(2σ²)}. Let z = ‖x-y‖/σ. The variance of K_RFF(x, y) is

    Var(K_RFF(x, y)) = (1/(2D)) (1 - e^{-z²})².
The idea of Orthogonal Random Features (ORF) is to impose orthogonality on the linear
transformation matrix. Note that one cannot achieve unbiased kernel estimation by simply
replacing G by an orthogonal matrix, since the norms of the rows of G follow the χ-distribution,
while rows of an orthogonal matrix have unit norm. The linear transformation matrix of ORF has
the following form

    W_ORF = (1/σ) S Q,   (2)

where Q is a uniformly distributed random orthogonal matrix¹. The set of rows of Q forms a basis in
R^d. S is a diagonal matrix, with diagonal entries sampled i.i.d. from the χ-distribution with d degrees
of freedom. S makes the norms of the rows of SQ and G identically distributed.
Denote the approximate kernel based on the above W_ORF as K_ORF(x, y). The following shows that
K_ORF(x, y) is an unbiased estimator of the kernel, and it has lower variance in comparison to RFF.
Theorem 1. K_ORF(x, y) is an unbiased estimator of the Gaussian kernel, i.e.,

    E(K_ORF(x, y)) = e^{-‖x-y‖²/(2σ²)}.
¹ We first generate the random Gaussian matrix G in (1). Q is the orthogonal matrix obtained from the QR
decomposition of G. Q is distributed uniformly on the Stiefel manifold (the space of all orthogonal matrices)
based on the Bartlett decomposition theorem [16].
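In code, the footnote's recipe for W_ORF is short (our sketch; for exact uniformity on the Stiefel manifold one should also fix the signs of Q using the diagonal of R, which we do below):

```python
import numpy as np

def orf_matrix(d, sigma, seed=0):
    """W_ORF = (1/sigma) * S * Q, cf. Eq. (2)."""
    rng = np.random.default_rng(seed)
    G = rng.normal(size=(d, d))
    Q, R = np.linalg.qr(G)
    Q = Q * np.sign(np.diag(R))                 # sign fix for a uniform Q
    s = np.sqrt(rng.chisquare(df=d, size=d))    # chi(d)-distributed row scales
    return (s[:, None] * Q) / sigma
```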
[Figure 2; panels: (a) Variance ratio (when d is large), (b) Variance ratio (simulation), (c) Empirical distribution of z; datasets in (c): letter, forest, usps, cifar, mnist, gisette.]
Figure 2: (a) Var(K_ORF(x, y))/Var(K_RFF(x, y)) when d is large and d = D. z = ‖x-y‖/σ. (b)
Simulation of Var(K_ORF(x, y))/Var(K_RFF(x, y)) when D = d. Note that the empirical variance is
the Mean Squared Error (MSE). (c) Distribution of z for several datasets, when we set σ as the mean
distance to the 50th-nearest neighbor for samples from the dataset. The count is normalized such that the
area under the curve for each dataset is 1. Observe that most points in all the datasets have z < 2. As
shown in (a), for these values of z, ORF has much smaller variance compared to the standard RFF.
Let D ≤ d, and z = ‖x-y‖/σ. There exists a function f such that for all z, the variance of
K_ORF(x, y) is bounded by

    Var(K_ORF(x, y)) ≤ (1/(2D)) [ (1 - e^{-z²})² - ((D-1)/d) e^{-z²} z⁴ ] + f(z)/d².
Proof. We first show the proof of the unbiasedness. Let Δ = (x - y)/σ, and z = ‖Δ‖. Then

    E(K_ORF(x, y)) = E[ (1/D) ∑_{i=1}^D cos(w_i^⊤ Δ) ] = (1/D) ∑_{i=1}^D E[cos(w_i^⊤ Δ)].

Based on the definition of ORF, w_1, w_2, ..., w_D are D random vectors given by w_i = s_i u_i, with u_1, u_2, ..., u_D a uniformly
chosen random orthonormal basis for R^d, and the s_i's are independent χ-distributed random
variables with d degrees of freedom. It is easy to show that for each i, w_i is distributed according to
N(0, I_d), and hence by Bochner's theorem,

    E[cos(w^⊤ Δ)] = e^{-z²/2}.

We now show a proof sketch of the variance. Suppose a_i = cos(w_i^⊤ Δ). Then

    Var( (1/D) ∑_{i=1}^D a_i ) = E[ ( (1/D) ∑_i a_i )² ] - ( E[ (1/D) ∑_i a_i ] )²
      = (1/D²) ∑_i ( E[a_i²] - E[a_i]² ) + (1/D²) ∑_i ∑_{j≠i} ( E[a_i a_j] - E[a_i] E[a_j] )
      = (1 - e^{-z²})² / (2D) + (D(D-1)/D²) ( E[a_1 a_2] - e^{-z²} ),

where the last equality follows from symmetry. The first term in the resulting expression is exactly
the variance of RFF. In order to have lower variance, E[a_1 a_2] - e^{-z²} must be negative. We use the
following lemma to quantify this term.
Lemma 2. (Appendix A.3) There is a function f such that for any z,

    E[a_i a_j] ≤ e^{-z²} - e^{-z²} z⁴/(2d) + f(z)/d².

Therefore, for a large d, and D ≤ d, the ratio of the variance of ORF and RFF is

    Var(K_ORF(x, y)) / Var(K_RFF(x, y)) ≈ 1 - (D-1) e^{-z²} z⁴ / ( d (1 - e^{-z²})² ).   (3)
Figure 2(a) shows the ratio of the variance of ORF to that of RFF when D = d and d is large. First
notice that this ratio is always smaller than 1, and hence ORF always provides improvement over
the conventional RFF.
[Figure 3; panels: (a) Bias of ORF0, (b) Bias of SORF, (c) Variance ratio of ORF0, (d) Variance ratio of SORF; curves for d = 2, 4, 8, 16, 32, ∞.]
Figure 3: Simulations of bias and variance of ORF0 and SORF. z = ‖x-y‖/σ. (a)
E(K_ORF0(x, y)) - e^{-z²/2}. (b) E(K_SORF(x, y)) - e^{-z²/2}. (c) Var(K_ORF0(x, y))/Var(K_RFF(x, y)).
(d) Var(K_SORF(x, y))/Var(K_RFF(x, y)). Each point on the curve is based on 20,000 choices of the
random matrices and two fixed points with distance z. For both ORF0 and SORF, even at d = 32, the
bias is close to 0 and the variance is close to that of d = ∞ (Figure 2(a)).
Interestingly, we gain significantly for small values of z. In fact, when z → 0
and d → ∞, the ratio is roughly z² (note e^x ≈ 1 + x when x → 0), and ORF exhibits infinitely
lower error relative to RFF. Figure 2(b) shows empirical simulations of this ratio. We can see that the
variance ratio is close to that of d = ∞ (Eq. (3)), even when d = 32, a fairly low-dimensional setting in
real-world cases.
Recall that z = ‖x − y‖/σ. This means that ORF preserves the kernel value especially well for data points that are close, thereby retaining the local structure of the dataset. Furthermore, empirically σ is typically not set too small in order to prevent overfitting; a common rule of thumb is to set σ to be the average distance of the 50th-nearest neighbors in a dataset. In Figure 2(c), we plot the distribution of z for several datasets with this choice of σ. These distributions are all concentrated in the regime where ORF yields substantial variance reduction.
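A minimal sketch of this rule of thumb (our own helper; the sample size m = 1,000 mirrors the choice used for Figure 4 below, and the dense pairwise-distance computation is chosen for readability, not scalability):

```python
import numpy as np

def sigma_50nn(X, rng, m=1_000):
    """sigma = mean distance from sampled points to their 50th nearest neighbor."""
    idx = rng.choice(len(X), size=min(m, len(X)), replace=False)
    diffs = X[idx][:, None, :] - X[None, :, :]   # (m, n, d) difference tensor
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    dists.sort(axis=1)                           # column 0 is the point itself
    return dists[:, 50].mean()
```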
The above analysis is under the assumption that D ≤ d. Empirically, for RFF, D needs to be larger than d in order to achieve low approximation error. In that case, we independently generate and apply the transformation (2) multiple times. The next lemma bounds the variance for this case.
Corollary 1. Let D = m · d for an integer m, and z = ‖x − y‖/σ. There exists a function f such that for all z, the variance of K_ORF(x, y) is bounded by
\[
\mathrm{Var}\left(K_{\mathrm{ORF}}(x, y)\right) \;\le\; \frac{1}{2D}\left[\left(1 - e^{-z^2}\right)^2 - \frac{d-1}{d}\, e^{-z^2} z^4\right] + \frac{f(z)}{dD}.
\]
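The construction behind this corollary can be sketched as follows (our own code; the QR factorization of a Gaussian matrix yields a uniformly random orthonormal basis, and the χ draws restore the Gaussian row lengths, exactly as in the proof of unbiasedness above):

```python
import numpy as np

def orf_matrix(D, d, sigma, rng):
    """W_ORF = S Q / sigma, stacking independent d x d blocks when D > d."""
    blocks = []
    for _ in range(-(-D // d)):                      # ceil(D / d) independent blocks
        Q, _ = np.linalg.qr(rng.normal(size=(d, d))) # random orthonormal rows u_i
        s = np.sqrt(rng.chisquare(df=d, size=d))     # s_i ~ chi with d degrees of freedom
        blocks.append(s[:, None] * Q)                # rows w_i = s_i u_i
    return np.vstack(blocks)[:D] / sigma

rng = np.random.default_rng(1)
W = orf_matrix(D=128, d=64, sigma=1.0, rng=rng)
x, y = rng.normal(size=64), rng.normal(size=64)
k_hat = np.cos(W @ (x - y)).mean()                   # approximate Gaussian kernel value
```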
4 Structured Orthogonal Random Features
In the previous section, we presented Orthogonal Random Features (ORF) and provided a theoretical
explanation for their effectiveness. Since generating orthogonal matrices in high dimensions can be
expensive, here we propose a fast version of ORF by imposing structure on the orthogonal matrices.
This method can provide drastic memory and time savings with minimal compromise on kernel
approximation quality. Note that the previous works on fast kernel approximation using structured
matrices do not use structured orthogonal matrices [13, 28, 7].
Let us first introduce a simplified version of ORF: replace S in (2) by the scalar √d. Let us call this method ORF0. The transformation matrix thus has the following form:
\[
W_{\mathrm{ORF0}} = \frac{\sqrt{d}}{\sigma}\, Q. \tag{4}
\]
Theorem 2. (Appendix B) Let K_ORF0(x, y) be the approximate kernel computed with the linear transformation matrix (4). Let D ≤ d and z = ‖x − y‖/σ. There exists a function f such that the bias of K_ORF0(x, y) satisfies
\[
\left|\mathbb{E}\left(K_{\mathrm{ORF0}}(x, y)\right) - e^{-z^2/2}\right| \;\le\; e^{-z^2/2}\, \frac{z^4}{4d} + \frac{f(z)}{d^2},
\]
and the variance satisfies
\[
\mathrm{Var}\left(K_{\mathrm{ORF0}}(x, y)\right) \;\le\; \frac{1}{2D}\left[\left(1 - e^{-z^2}\right)^2 - \frac{D-1}{d}\, e^{-z^2} z^4\right] + \frac{f(z)}{d^2}.
\]

Figure 4: Kernel approximation mean squared error (MSE) for the Gaussian kernel K(x, y) = e^{−‖x−y‖²/2σ²} on six datasets: (a) LETTER (d = 16), (b) FOREST (d = 64), (c) USPS (d = 256), (d) CIFAR (d = 512), (e) MNIST (d = 1024), (f) GISETTE (d = 4096). Each panel compares RFF, ORF, SORF, QMC (DigitalNet), circulant, and Fastfood. D: number of transformations. d: input feature dimension. For each dataset, σ is chosen to be the mean distance of the 50th ℓ₂ nearest neighbor for 1,000 sampled datapoints. Empirically, this yields good classification results. The curves for SORF and ORF overlap.
The above implies that when d is large, K_ORF0(x, y) is a good estimate of the kernel with low variance. Figure 3(a) shows that even for relatively small d, the estimate is almost unbiased. Figure 3(c) shows that when d ≥ 32, the variance ratio is very close to that of d = ∞. We find empirically that ORF0 also provides very similar MSE in comparison with ORF on real-world datasets.
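Such MSE comparisons can be computed directly from the kernel estimator, as in the sketch below (our own evaluation helper, reusing the orf_matrix sketch above; it assumes the rows of W already include the 1/σ scaling):

```python
import numpy as np

def kernel_mse(W, pairs, sigma):
    """MSE of K_hat(x, y) = mean_i cos(w_i^T (x - y)) against the Gaussian kernel."""
    errs = []
    for x, y in pairs:
        k_hat = np.cos(W @ (x - y)).mean()
        k_true = np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))
        errs.append((k_hat - k_true) ** 2)
    return float(np.mean(errs))
```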
We now introduce Structured Orthogonal Random Features (SORF). It replaces the random orthogonal matrix Q of ORF0 in (4) by a special type of structured matrix HD₁HD₂HD₃:
\[
W_{\mathrm{SORF}} = \frac{\sqrt{d}}{\sigma}\, \mathbf{H} D_1 \mathbf{H} D_2 \mathbf{H} D_3, \tag{5}
\]
where D_i ∈ R^{d×d}, i = 1, 2, 3, are diagonal "sign-flipping" matrices, with each diagonal entry sampled from the Rademacher distribution, and H is the normalized Walsh-Hadamard matrix.
Computing W_SORF x has the time cost O(d log d), since multiplication with D takes O(d) time and
multiplication with H takes O(d log d) time using fast Hadamard transformation. The computation
of SORF can also be carried out with almost no extra memory due to the fact that both sign flipping
and the Walsh-Hadamard transformation can be efficiently implemented as in-place operations [9].
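A sketch of the transformation in (5) without ever forming a d × d matrix (our own implementation; the textbook double-loop FWHT keeps the recursion explicit rather than fast in interpreted Python, and d must be a power of 2):

```python
import numpy as np

def fwht(v):
    """In-place unnormalized fast Walsh-Hadamard transform, O(d log d)."""
    h, d = 1, len(v)
    while h < d:
        for i in range(0, d, 2 * h):
            for j in range(i, i + h):
                a, b = v[j], v[j + h]
                v[j], v[j + h] = a + b, a - b
        h *= 2
    return v

def sorf_transform(x, D1, D2, D3, sigma):
    """Computes (sqrt(d)/sigma) * H D1 H D2 H D3 x; D1, D2, D3 are +/-1 vectors."""
    d = len(x)
    y = x.astype(float).copy()
    for Dvec in (D3, D2, D1):              # apply the rightmost factor first
        y = fwht(y * Dvec) / np.sqrt(d)    # normalized Hadamard in O(d log d)
    return np.sqrt(d) / sigma * y

rng = np.random.default_rng(0)
d = 8
D1, D2, D3 = (rng.choice([-1.0, 1.0], size=d) for _ in range(3))
w = sorf_transform(rng.normal(size=d), D1, D2, D3, sigma=1.0)
```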
Figures 3(b) and 3(d) show the bias and variance of SORF. Note that although the curves for small d are different from those of ORF, when d is large (d > 32 in practice), the kernel estimate is almost unbiased, and the variance ratio converges to that of ORF. In other words, it is clear that SORF can provide almost identical kernel approximation quality to that of ORF. This is also confirmed by the experiments in Section 5. In Section 6, we provide theoretical discussions to show that the structure of (5) can also be generally applied to many scenarios where random Gaussian matrices are used.
Dataset (d)      | Method | D = 2d       | D = 4d       | D = 6d       | D = 8d       | D = 10d      | Exact
letter (16)      | RFF    | 76.44 ± 1.04 | 81.61 ± 0.46 | 85.46 ± 0.56 | 86.58 ± 0.99 | 87.84 ± 0.59 | 90.10
                 | ORF    | 77.49 ± 0.95 | 82.49 ± 1.16 | 85.41 ± 0.60 | 87.17 ± 0.40 | 87.73 ± 0.63 |
                 | SORF   | 76.18 ± 1.20 | 81.63 ± 0.77 | 84.43 ± 0.92 | 85.71 ± 0.52 | 86.78 ± 0.53 |
forest (64)      | RFF    | 77.61 ± 0.23 | 78.92 ± 0.30 | 79.29 ± 0.24 | 79.57 ± 0.21 | 79.85 ± 0.10 | 80.43
                 | ORF    | 77.88 ± 0.24 | 78.71 ± 0.19 | 79.38 ± 0.19 | 79.63 ± 0.21 | 79.54 ± 0.15 |
                 | SORF   | 77.64 ± 0.20 | 78.88 ± 0.14 | 79.31 ± 0.12 | 79.50 ± 0.14 | 79.56 ± 0.09 |
usps (256)       | RFF    | 94.27 ± 0.38 | 94.98 ± 0.10 | 95.43 ± 0.22 | 95.66 ± 0.25 | 95.71 ± 0.18 | 95.57
                 | ORF    | 94.21 ± 0.51 | 95.26 ± 0.25 | 96.46 ± 0.18 | 95.52 ± 0.20 | 95.76 ± 0.17 |
                 | SORF   | 94.45 ± 0.39 | 95.20 ± 0.43 | 95.51 ± 0.34 | 95.46 ± 0.34 | 95.67 ± 0.15 |
cifar (512)      | RFF    | 73.19 ± 0.23 | 75.06 ± 0.33 | 75.85 ± 0.30 | 76.28 ± 0.30 | 76.54 ± 0.31 | 78.71
                 | ORF    | 73.59 ± 0.44 | 75.06 ± 0.28 | 76.00 ± 0.26 | 76.29 ± 0.26 | 76.69 ± 0.09 |
                 | SORF   | 73.54 ± 0.26 | 75.11 ± 0.21 | 75.76 ± 0.21 | 76.48 ± 0.24 | 76.47 ± 0.28 |
mnist (1024)     | RFF    | 94.83 ± 0.13 | 95.48 ± 0.10 | 95.85 ± 0.07 | 96.02 ± 0.06 | 95.98 ± 0.05 | 97.14
                 | ORF    | 94.95 ± 0.25 | 95.64 ± 0.06 | 95.85 ± 0.09 | 95.95 ± 0.08 | 96.06 ± 0.07 |
                 | SORF   | 94.98 ± 0.18 | 95.48 ± 0.08 | 95.77 ± 0.09 | 95.98 ± 0.05 | 96.02 ± 0.07 |
gisette (4096)   | RFF    | 97.68 ± 0.28 | 97.74 ± 0.11 | 97.66 ± 0.25 | 97.70 ± 0.16 | 97.74 ± 0.05 | 97.60
                 | ORF    | 97.56 ± 0.17 | 97.72 ± 0.15 | 97.80 ± 0.07 | 97.64 ± 0.09 | 97.68 ± 0.04 |
                 | SORF   | 97.64 ± 0.17 | 97.62 ± 0.04 | 97.64 ± 0.11 | 97.68 ± 0.08 | 97.70 ± 0.14 |

Table 2: Classification accuracy (%) based on SVM. ORF and SORF provide competitive classification accuracy for a given D. Exact is based on a kernel-SVM trained on the Gaussian kernel (one value per dataset). Note that in all the settings SORF is faster than RFF and ORF by a factor of O(d/log d). For example, on gisette with D = 2d, SORF provides a 10 times speedup in comparison with RFF and ORF.
5 Experiments
Kernel Approximation. We first show kernel approximation performance on six datasets. The input
feature dimension d is set to be a power of 2 by padding zeros or subsampling. Figure 4 compares the
mean squared error (MSE) of all methods. For fixed D, the kernel approximation MSE exhibits the
following ordering:
SORF ≈ ORF < QMC [25] < RFF [19] < Other fast kernel approximations [13, 28].
By imposing orthogonality on the linear transformation matrix, Orthogonal Random Features (ORF)
achieves significantly lower approximation error than Random Fourier Features (RFF). The Structured
Orthogonal Random Features (SORF) have almost identical MSE to that of ORF. All other fast kernel
approximation methods, such as circulant [28] and FastFood [13] have higher MSE. We also include
DigitalNet, the best performing method among Quasi-Monte Carlo techniques [25]. Its MSE is lower
than that of RFF, but still higher than that of ORF and SORF. The order of time cost for a fixed D is
SORF ≈ Other fast kernel approximations [13, 28] < ORF = QMC [25] = RFF [19].
Remarkably, SORF has both better computational efficiency and higher kernel approximation quality
compared to other methods.
We also apply ORF and SORF on classification tasks. Table 2 shows classification accuracy for
different kernel approximation techniques with a (linear) SVM classifier. SORF is competitive with
or better than RFF, and has greatly reduced time and space costs.
The Role of σ. Note that a very small σ will lead to overfitting, and a very large σ provides no discriminative power for classification. Throughout the experiments, σ for each dataset is chosen to be the mean distance of the 50th ℓ₂ nearest neighbor, which empirically yields good classification results [28]. As shown in Section 3, the relative improvement over RFF is positively correlated with σ. Figures 5(a) and 5(b) verify this on the mnist dataset. Notice that the proposed methods (ORF and SORF) consistently improve over RFF.
Simplifying SORF. The SORF transformation consists of three Hadamard-Diagonal blocks. A
natural question is whether using fewer computations and randomness can achieve similar empirical
performance. Figure 5(c) shows that reducing the number of blocks to two (HDHD) provides similar
performance, while reducing to one block (HD) leads to large error.
6 Analysis and General Applicability of the Hadamard-Diagonal Structure
We provide theoretical discussions of SORF in this section. We first show that for large d, SORF is
an unbiased estimator of the Gaussian kernel.
Figure 5: (a), (b) MSE on mnist with different σ: (a) σ = 0.5× the 50NN distance, (b) σ = 2× the 50NN distance. (c) Effect of using less randomness on mnist. HDHDHD is the proposed SORF method, HDHD reduces the number of Hadamard-Diagonal blocks to two, and HD uses only one such block.
Theorem 3. (Appendix C) Let K_SORF(x, y) be the approximate kernel computed with the linear transformation matrix (√d/σ) HD₁HD₂HD₃. Let z = ‖x − y‖/σ. Then
\[
\left|\mathbb{E}\left(K_{\mathrm{SORF}}(x, y)\right) - e^{-z^2/2}\right| \;\le\; \frac{6z}{\sqrt{d}}.
\]
Even though SORF is nearly unbiased, proving tight variance and concentration guarantees similar to those of ORF remains an open question. The following discussion provides a sketch in that direction. We first show a lemma on RFF.

Lemma 3. Let W be a random Gaussian matrix as in RFF. For a given z, the distribution of Wz is N(0, ‖z‖² I_d).

Note that Wz in RFF can be written as Rg, where R is a scaled orthogonal matrix such that each row has norm ‖z‖₂ and g is distributed according to N(0, I_d). Hence the distribution of Rg is N(0, ‖z‖² I_d), identical to that of Wz. The concentration results of RFF use the fact that the projections of a Gaussian vector g onto orthogonal directions R are independent.
We show that √d HD₁HD₂HD₃ z has similar properties. In particular, we show that it can be written as R̃g̃, where the rows of R̃ are "near-orthogonal" (with high probability) and have norm ‖z‖₂, and the vector g̃ is close to Gaussian (g̃ has independent sub-Gaussian elements); hence the projections behave "near-independently". Specifically, g̃ = vec(D₁) (the vector of diagonal entries of D₁), and R̃ is a function of D₂, D₃ and z.

Theorem 4. (Appendix D) For a given z, there exists an R̃ (a function of D₂, D₃, z) such that √d HD₁HD₂HD₃ z = R̃ vec(D₁). Each row of R̃ has norm ‖z‖₂, and for any t ≥ 1/d, with probability at least 1 − d e^{−c t^{2/3} d^{1/3}}, the inner product between any two rows of R̃ is at most t‖z‖₂², where c is a constant.
The above result can also be applied to settings not limited to kernel approximation. In the appendix,
we show empirically that the same scheme can be successfully applied to angle estimation where the
nonlinear map f is a non-smooth sign(·) function [4]. We note that the HD₁HD₂HD₃ structure
has also been recently used in fast cross-polytope LSH [2, 12, 6].
7 Conclusions
We have demonstrated that imposing orthogonality on the transformation matrix can greatly reduce
the kernel approximation MSE of Random Fourier Features when approximating Gaussian kernels.
We further proposed a type of structured orthogonal matrices with substantially lower computation
and memory cost. We provided theoretical insights indicating that the Hadamard-Diagonal block
structure can be generally used to replace random Gaussian matrices in a broader range of applications.
Our method can also be generalized to other types of kernels, such as general shift-invariant kernels and polynomial kernels, based on Schoenberg's characterization as in [18].
References
[1] N. Ailon and B. Chazelle. Approximate nearest neighbors and the fast Johnson-Lindenstrauss transform. In STOC, 2006.
[2] A. Andoni, P. Indyk, T. Laarhoven, I. Razenshteyn, and L. Schmidt. Practical and optimal LSH for angular distance. In NIPS, 2015.
[3] S. Bochner. Harmonic Analysis and the Theory of Probability. Dover Publications, 1955.
[4] M. S. Charikar. Similarity estimation techniques from rounding algorithms. In STOC, 2002.
[5] Y. Cheng, F. X. Yu, R. S. Feris, S. Kumar, A. Choudhary, and S.-F. Chang. An exploration of parameter redundancy in deep networks with circulant projections. In ICCV, 2015.
[6] K. Choromanski, F. Fagan, C. Gouy-Pailler, A. Morvan, T. Sarlos, and J. Atif. TripleSpin - a generic compact paradigm for fast machine learning computations. arXiv, 2016.
[7] K. Choromanski and V. Sindhwani. Recycling randomness with structure for sublinear time kernel expansions. ICML, 2015.
[8] C. Cortes and V. Vapnik. Support-vector networks. Machine Learning, 20(3):273-297, 1995.
[9] B. J. Fino and V. R. Algazi. Unified matrix treatment of the fast Walsh-Hadamard transform. IEEE Transactions on Computers, (11):1142-1146, 1976.
[10] T. Joachims. Training linear SVMs in linear time. In KDD, 2006.
[11] P. Kar and H. Karnick. Random feature maps for dot product kernels. In AISTATS, 2012.
[12] C. Kennedy and R. Ward. Fast cross-polytope locality-sensitive hashing. arXiv, 2016.
[13] Q. Le, T. Sarlós, and A. Smola. Fastfood - approximating kernel expansions in loglinear time. In ICML, 2013.
[14] F. Li, C. Ionescu, and C. Sminchisescu. Random Fourier approximations for skewed multiplicative histogram kernels. Pattern Recognition, pages 262-271, 2010.
[15] S. Maji and A. C. Berg. Max-margin additive classifiers for detection. In ICCV, 2009.
[16] R. J. Muirhead. Aspects of Multivariate Statistical Theory, volume 197. John Wiley & Sons, 2009.
[17] H. Niederreiter. Quasi-Monte Carlo Methods. Wiley Online Library, 2010.
[18] J. Pennington, F. Yu, and S. Kumar. Spherical random features for polynomial kernels. In NIPS, 2015.
[19] A. Rahimi and B. Recht. Random features for large-scale kernel machines. In NIPS, 2007.
[20] A. Rudi, R. Camoriano, and L. Rosasco. Generalization properties of learning with random features. arXiv:1602.04474, 2016.
[21] S. Shalev-Shwartz, Y. Singer, N. Srebro, and A. Cotter. Pegasos: primal estimated sub-gradient solver for SVM. Mathematical Programming, 127(1):3-30, 2011.
[22] V. Sreekanth, A. Vedaldi, A. Zisserman, and C. Jawahar. Generalized RBF feature maps for efficient detection. In BMVC, 2010.
[23] B. Sriperumbudur and Z. Szabó. Optimal rates for random Fourier features. In NIPS, 2015.
[24] A. Vedaldi and A. Zisserman. Efficient additive kernels via explicit feature maps. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(3):480-492, 2012.
[25] J. Yang, V. Sindhwani, H. Avron, and M. Mahoney. Quasi-Monte Carlo feature maps for shift-invariant kernels. In ICML, 2014.
[26] T. Yang, Y.-F. Li, M. Mahdavi, R. Jin, and Z.-H. Zhou. Nyström method vs random Fourier features: A theoretical and empirical comparison. In NIPS, 2012.
[27] F. X. Yu, S. Kumar, Y. Gong, and S.-F. Chang. Circulant binary embedding. In ICML, 2014.
[28] F. X. Yu, S. Kumar, H. Rowley, and S.-F. Chang. Compact nonlinear maps and circulant extensions. arXiv:1503.03893, 2015.
[29] X. Zhang, F. X. Yu, R. Guo, S. Kumar, S. Wang, and S.-F. Chang. Fast orthogonal projection based on Kronecker product. In ICCV, 2015.
A Minimax Approach to Supervised Learning
Farzan Farnia∗
farnia@stanford.edu

David Tse∗
dntse@stanford.edu
Abstract
Given a task of predicting Y from X, a loss function L, and a set of probability distributions Γ on (X, Y), what is the optimal decision rule minimizing the worst-case expected loss over Γ? In this paper, we address this question by introducing a generalization of the maximum entropy principle. Applying this principle to sets of distributions with marginal on X constrained to be the empirical marginal, we provide a minimax interpretation of the maximum likelihood problem over generalized linear models as well as some popular regularization schemes. For quadratic and logarithmic loss functions we revisit well-known linear and logistic regression models. Moreover, for the 0-1 loss we derive a classifier which we call the minimax SVM. The minimax SVM minimizes the worst-case expected 0-1 loss over the proposed Γ by solving a tractable optimization problem. We perform several numerical experiments to show the power of the minimax SVM in outperforming the SVM.
1 Introduction
Supervised learning, the task of inferring a function that predicts a target Y from a feature vector
X = (X1 , . . . , Xd ) by using n labeled training samples {(x1 , y1 ), . . . , (xn , yn )}, has been a problem
of central interest in machine learning. Given the underlying distribution P̃_{X,Y}, the optimal prediction rules had long been studied and formulated in the statistics literature. However, the advent of high-dimensional problems raised this important question: What would be a good prediction rule when we do not have enough samples to estimate the underlying distribution?

To understand the difficulty of learning in high-dimensional settings, consider a genome-based classification task where we seek to predict a binary trait of interest Y from an observation of 3,000,000 SNPs, each of which can be considered as a discrete variable X_i ∈ {0, 1, 2}. Hence, to estimate the underlying distribution we need O(3^{3,000,000}) samples.
With no possibility of estimating the underlying P̃ in such problems, several approaches have been proposed to deal with high-dimensional settings. The standard approach in statistical learning theory is empirical risk minimization (ERM) [1]. ERM learns the prediction rule by minimizing an approximated loss under the empirical distribution of samples. However, to avoid overfitting, ERM restricts the set of allowable decision rules to a class of functions with limited complexity measured through its VC-dimension.

This paper focuses on a complementary approach to ERM where one can learn the prediction rule through minimizing a decision rule's worst-case loss over a larger set of distributions Γ(P̂) centered at the empirical distribution P̂. In other words, instead of restricting the class of decision rules, we consider and evaluate all possible decision rules, but based on a more stringent criterion that they will have to perform well over all distributions in Γ(P̂). As seen in Figure 1, this minimax approach can be broken into three main steps:

1. We compute the empirical distribution P̂ from the data,
∗Department of Electrical Engineering, Stanford University, Stanford, CA 94305.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Figure 1: Minimax Approach
Figure 2: Minimax-hinge Loss
2. We form a distribution set Γ(P̂) based on P̂,
3. We learn a prediction rule ψ∗ that minimizes the worst-case expected loss over Γ(P̂).
Some special cases of this minimax approach, which are based on learning a prediction rule from
low-order marginal/moments, have been addressed in the literature: [2] solves a robust minimax
classification problem for continuous settings with fixed first and second-order moments; [3] develops
a classification approach by minimizing the worst-case hinge loss subject to fixed low-order marginals;
[4] fits a model minimizing the maximal correlation under fixed pairwise marginals to design a robust
classification scheme. In this paper, we develop a general minimax approach for supervised learning
problems with arbitrary loss functions.
To formulate Step 3 in Figure 1, given a general loss function L and a set of distributions Γ(P̂), we generalize the problem formulation discussed in [3] to
\[
\operatorname*{argmin}_{\psi \in \Psi} \; \max_{P \in \Gamma(\hat{P})} \; \mathbb{E}\left[ L\left(Y, \psi(X)\right) \right]. \tag{1}
\]
Here, Ψ is the space of all decision rules. Notice the difference with the ERM problem, where Ψ was restricted to smaller function classes while Γ(P̂) = {P̂}.
If we have to predict Y with no access to X, (1) will reduce to the formulation studied in [5]. There, the authors propose to use the principle of maximum entropy [6], for a generalized definition of entropy, to find the optimal prediction rule minimizing the worst-case expected loss. By the principle of maximum entropy, we should predict based on a distribution in Γ(P̂) that maximizes the entropy function.
How can we use the principle of maximum entropy to solve (1) when we observe X as well? A
natural idea is to apply the maximum entropy principle to the conditional PY |X=x instead of the
marginal PY . This idea motivates a generalized version of the principle of maximum entropy, which
we call the principle of maximum conditional entropy. In fact, this principle breaks Step 3 into two
smaller steps:
3a. We search for P∗, the distribution maximizing the conditional entropy over Γ(P̂),
3b. We find ψ∗, the optimal decision rule for P∗.

Although the principle of maximum conditional entropy characterizes the solution to (1), computing the maximizing distribution is hard in general. In [7], the authors propose a conditional version of the principle of maximum entropy, for the specific case of Shannon entropy, and draw the principle's connection to (1). They call it the principle of minimum mutual information, by which one should
predict based on the distribution minimizing the mutual information between X and Y. However, they
develop their theory targeting a broad class of distribution sets, which results in a convex problem,
yet the number of variables is exponential in the dimension of the problem.
To overcome this issue, we propose a specific structure for the distribution set by matching the marginal P_X of all the joint distributions P_{X,Y} in Γ(P̂) to the empirical marginal P̂_X, while matching only the cross-moments between X and Y with those of the empirical distribution P̂_{X,Y}. We show that this choice of Γ(P̂) has two key advantages: 1) the minimax decision rule ψ∗ can be computed efficiently; 2) the minimax generalization error can be controlled by allowing a level of uncertainty in the matching of the cross-moments, which can be viewed as regularization in the minimax framework.
Our solution is achieved through convex duality. For some loss functions, the dual problem turns
out to be equivalent to the maximum likelihood problem for generalized linear models. For example,
under quadratic and logarithmic loss functions this minimax approach revisits the linear and logistic
regression models respectively.
On the other hand, for the 0-1 loss, the minimax approach leads to a new randomized linear classifier which we call the minimax SVM. The minimax SVM minimizes the worst-case expected 0-1 loss over Γ(P̂) by solving a tractable optimization problem. In contrast, the classic ERM formulation of minimizing the 0-1 loss over linear classifiers is well-known to be NP-hard [8]. Interestingly, the dual problem for the 0-1 loss minimax problem corresponds also to an ERM problem for linear classifiers, but with a loss function different from the 0-1 loss. This loss function, which we call the minimax-hinge loss, is also different from the classic hinge loss (Figure 2). We emphasize that while the hinge loss is an ad hoc surrogate loss function chosen to convexify the 0-1 loss ERM problem, the minimax-hinge loss emerges from the minimax formulation. We also perform several numerical experiments to demonstrate the power of the minimax SVM in outperforming the standard SVM which minimizes the surrogate hinge loss.
2 Principle of Maximum Conditional Entropy
In this section, we provide a conditional version of the key definitions and results developed in [5].
We propose the principle of maximum conditional entropy to break Step 3 into 3a and 3b in Figure 1.
We also define and characterize Bayes decision rules for different loss functions to address Step 3b.
2.1 Decision Problems, Bayes Decision Rules, Conditional Entropy
Consider a decision problem. Here the decision maker observes X ∈ 𝒳, from which she predicts a random target variable Y ∈ 𝒴 using an action a ∈ 𝒜. Let P_{X,Y} = (P_X, P_{Y|X}) be the underlying distribution for the random pair (X, Y). Given a loss function L : 𝒴 × 𝒜 → [0, ∞], L(y, a) indicates the loss suffered by the decision maker by deciding action a when Y = y. The decision maker uses a decision rule ψ : 𝒳 → 𝒜 to select an action a = ψ(x) from 𝒜 based on an observation x ∈ 𝒳. We will in general allow the decision rules to be random, i.e. ψ is random. The main purpose of extending to the space of randomized decision rules is to form a convex set of decision rules. Later in Theorem 1, this convexity is used to prove a saddle-point theorem.
We call a (randomized) decision rule ψ_Bayes a Bayes decision rule if for all decision rules ψ and for all x ∈ 𝒳:
\[
\mathbb{E}[L(Y, \psi_{\mathrm{Bayes}}(X)) \,|\, X = x] \;\le\; \mathbb{E}[L(Y, \psi(X)) \,|\, X = x].
\]
It should be noted that ψ_Bayes depends only on P_{Y|X}, i.e. it remains a Bayes decision rule under a different P_X. The (unconditional) entropy of Y is defined as [5]
\[
H(Y) := \inf_{a \in \mathcal{A}} \mathbb{E}[L(Y, a)]. \tag{2}
\]
Similarly, we can define the conditional entropy of Y given X = x as
\[
H(Y \,|\, X = x) := \inf_{\psi} \mathbb{E}[L(Y, \psi(X)) \,|\, X = x], \tag{3}
\]
and the conditional entropy of Y given X as
\[
H(Y \,|\, X) := \sum_{x} P_X(x)\, H(Y \,|\, X = x) = \inf_{\psi} \mathbb{E}[L(Y, \psi(X))]. \tag{4}
\]
Note that H(Y|X = x) and H(Y|X) are both concave in P_{Y|X}. Applying Jensen's inequality, this concavity implies that
\[
H(Y \,|\, X) \;\le\; H(Y),
\]
which motivates the following definition for the information that X carries about Y,
\[
I(X; Y) := H(Y) - H(Y \,|\, X), \tag{5}
\]
i.e. the reduction of expected loss in predicting Y by observing X. In [9], the author has defined the same concept, which he calls a coherent dependence measure. It can be seen that I(X; Y) = E_{P_X}[D(P_{Y|X}, P_Y)], where D is the divergence measure corresponding to the loss L, defined for any two probability distributions P_Y, Q_Y with Bayes actions a_P, a_Q as [5]
\[
D(P_Y, Q_Y) := \mathbb{E}_P[L(Y, a_Q)] - \mathbb{E}_P[L(Y, a_P)] = \mathbb{E}_P[L(Y, a_Q)] - H_P(Y). \tag{6}
\]
2.2 Examples

2.2.1 Logarithmic loss

For an outcome y ∈ 𝒴 and distribution Q_Y, define the logarithmic loss as L_log(y, Q_Y) = −log Q_Y(y). It can be seen that H_log(Y), H_log(Y|X), I_log(X; Y) are the well-known unconditional and conditional Shannon entropy and mutual information [10]. Also, the Bayes decision rule for a distribution P_{X,Y} is given by ψ_Bayes(x) = P_{Y|X}(·|x).
2.2.2 0-1 loss

The 0-1 loss function is defined for any y, ŷ ∈ 𝒴 as L_{0-1}(y, ŷ) = I(ŷ ≠ y). Then, we can show
\[
H_{0\text{-}1}(Y) = 1 - \max_{y \in \mathcal{Y}} P_Y(y), \qquad H_{0\text{-}1}(Y \,|\, X) = 1 - \sum_{x \in \mathcal{X}} \max_{y \in \mathcal{Y}} P_{X,Y}(x, y).
\]
The Bayes decision rule for a distribution P_{X,Y} is the well-known maximum a posteriori (MAP) rule, i.e. ψ_Bayes(x) = argmax_{y∈𝒴} P_{Y|X}(y|x).
2.2.3 Quadratic loss

The quadratic loss function is defined as L₂(y, ŷ) = (y − ŷ)². It can be seen that
\[
H_2(Y) = \mathrm{Var}(Y), \qquad H_2(Y \,|\, X) = \mathbb{E}[\mathrm{Var}(Y \,|\, X)], \qquad I_2(X; Y) = \mathrm{Var}\left(\mathbb{E}[Y \,|\, X]\right).
\]
The Bayes decision rule for any P_{X,Y} is the well-known minimum mean-square error (MMSE) estimator, that is, ψ_Bayes(x) = E[Y|X = x].
2.3 Principle of Maximum Conditional Entropy & Robust Bayes decision rules
Given a distribution set Γ, consider the following minimax problem to find a decision rule minimizing the worst-case expected loss over Γ:
\[
\operatorname*{argmin}_{\psi \in \Psi} \; \max_{P \in \Gamma} \; \mathbb{E}_P[L(Y, \psi(X))], \tag{7}
\]
where Ψ is the space of all randomized mappings from 𝒳 to 𝒜 and E_P denotes the expected value over the distribution P. We call any solution ψ∗ to the above problem a robust Bayes decision rule against Γ. The following results motivate a generalization of the maximum entropy principle to find a robust Bayes decision rule. Refer to the supplementary material for the proofs.
Theorem 1.A. (Weak Version) Suppose Γ is convex and closed, and let L be a bounded loss function. Assume 𝒳, 𝒴 are finite and that the risk set S = { [L(y, a)]_{y∈𝒴} : a ∈ 𝒜 } is closed. Then there exists a robust Bayes decision rule ψ∗ against Γ, which is a Bayes decision rule for a distribution P∗ that maximizes the conditional entropy H(Y|X) over Γ.

Theorem 1.B. (Strong Version) Suppose Γ is convex and that under any P ∈ Γ there exists a Bayes decision rule. We also assume the continuity in Bayes decision rules for distributions in Γ (see the supplementary material for the exact condition). Then, if P∗ maximizes H(Y|X) over Γ, any Bayes decision rule for P∗ is a robust Bayes decision rule against Γ.
Principle of Maximum Conditional Entropy: Given a set of distributions Γ, predict Y based on a distribution in Γ that maximizes the conditional entropy of Y given X, i.e.
\[
\operatorname*{argmax}_{P \in \Gamma} \; H(Y \,|\, X). \tag{8}
\]
Note that while the weak version of Theorem 1 guarantees only the existence of a saddle point for
(7), the strong version further guarantees that any Bayes decision rule of the maximizing distribution
results in a robust Bayes decision rule. However, the continuity in Bayes decision rules does not hold
for the discontinuous 0-1 loss, which requires considering the weak version of Theorem 1 to address
this issue.
3 Prediction via Maximum Conditional Entropy Principle
Consider a prediction task with target variable Y and feature vector X = (X1 , . . . , Xd ). We do not
require the variables to be discrete. As discussed earlier, the maximum conditional entropy principle
reduces (7) to (8), which formulate steps 3 and 3a in Figure 1, respectively. However, a general
formulation of (8) in terms of the joint distribution PX,Y leads to an exponential computational
complexity in the feature dimension d.
The key question is therefore under what structures of Γ(P̂) in Step 2 we can solve (8) efficiently. In this section, we propose a specific structure for Γ(P̂), under which we provide an efficient solution to Steps 3a and 3b in Figure 1. In addition, we prove a bound on the excess worst-case risk for the proposed Γ(P̂).
To describe this structure, consider a set of distributions Γ(Q) centered around a given distribution Q_{X,Y}, where for a given norm ‖·‖ and mapping vector θ(Y)_{t×1},
\[
\Gamma(Q) = \left\{ P_{X,Y} :\; P_X = Q_X, \;\; \forall\, 1 \le i \le t :\; \left\| \mathbb{E}_P[\theta_i(Y) X] - \mathbb{E}_Q[\theta_i(Y) X] \right\| \le \epsilon_i \right\}. \tag{9}
\]
Here θ encodes Y with the t-dimensional θ(Y), and θ_i(Y) denotes the ith entry of θ(Y). The first constraint in the definition of Γ(Q) requires all distributions in Γ(Q) to share the same marginal on X as Q; the second imposes constraints on the cross-moments between X and Y, allowing for some uncertainty in estimation. When applied to the supervised learning problem, we will choose Q to be the empirical distribution P̂ and select ε appropriately based on the loss function L. However, for now we will consider the problem of solving (8) over Γ(Q) for general Q and ε.
To that end, we use a similar technique as in Fenchel's duality theorem, also used in [11, 12, 13] to address divergence minimization problems. However, we consider a different version of the convex conjugate for −H, which is defined with respect to θ. Considering 𝒫_Y as the set of all probability distributions for the variable Y, we define F_θ : R^t → R as the convex conjugate of −H(Y) with respect to the mapping θ,
\[
F_\theta(\mathbf{z}) := \max_{P \in \mathcal{P}_Y} \; H(Y) + \mathbb{E}[\theta(Y)]^T \mathbf{z}. \tag{10}
\]
Theorem 2. Define Γ(Q), F_θ as given by (9), (10). Then the following duality holds:
\[
\max_{P \in \Gamma(Q)} H(Y \,|\, X) \;=\; \min_{A \in \mathbb{R}^{t \times d}} \; \mathbb{E}_Q\!\left[ F_\theta(AX) - \theta(Y)^T AX \right] + \sum_{i=1}^{t} \epsilon_i \|A_i\|_*, \tag{11}
\]
where ‖A_i‖∗ denotes ‖·‖'s dual norm of A's ith row. Furthermore, for the optimal P∗ and A∗,
\[
\mathbb{E}_{P^*}[\, \theta(Y) \,|\, X = x \,] = \nabla F_\theta(A^* x). \tag{12}
\]
Proof. Refer to the supplementary material for the proof.
When applying Theorem 2 to a supervised learning problem with a specific loss function, θ will be chosen such that E_{P∗}[θ(Y) | X = x] provides sufficient information to compute the Bayes decision rule ψ∗ for P∗. This enables the direct computation of ψ∗, i.e. Step 3 of Figure 1, without the need to explicitly compute P∗ itself. For the loss functions discussed in Subsection 2.2, we choose the identity θ(Y) = Y for the quadratic loss and the one-hot encoding θ(Y) = [I(Y = i)]_{i=1}^t for the logarithmic and 0-1 loss functions. Later in this section, we will discuss how this theorem applies to these loss functions.
3.1 Generalization Bounds for the Worst-case Risk
By establishing the objective's Lipschitzness and boundedness through appropriate assumptions, we can bound the rate of uniform convergence for the problem in the RHS of (11) [14]. Here we consider the uniform convergence of the empirical averages, when Q = P̂_n is the empirical distribution of n samples drawn i.i.d. from the underlying distribution P̃, to their expectations when Q = P̃.

In the supplementary material, we prove the following theorem, which bounds the excess worst-case risk. Here ψ̂_n and ψ̃ denote the robust Bayes decision rules against Γ(P̂_n) and Γ(P̃), respectively.
Figure 3: Duality of Maximum Conditional Entropy/Maximum Likelihood in GLMs
As explained earlier, by the maximum conditional entropy principle we can learn ψ̂_n by solving the RHS of (11) for the empirical distribution of samples and then applying (12).
Theorem 3. Consider a loss function L with the entropy function H and suppose θ(Y) includes only one element, i.e. t = 1. Let M = max_{P∈𝒫_Y} H(Y) be the maximum entropy value over 𝒫_Y. Also, take ‖·‖/‖·‖∗ to be the ℓ_p/ℓ_q pair where 1/p + 1/q = 1, 1 ≤ q ≤ 2. Given that ‖X‖₂ ≤ B and |θ(Y)| ≤ L, for any δ > 0, with probability at least 1 − δ,
\[
\max_{P \in \Gamma(\tilde{P})} \mathbb{E}\left[L\left(Y, \hat{\psi}_n(X)\right)\right] \;-\; \max_{P \in \Gamma(\tilde{P})} \mathbb{E}\left[L\left(Y, \tilde{\psi}(X)\right)\right] \;\le\; \frac{4BLM}{\epsilon \sqrt{n}}\left(1 + \sqrt{\frac{9 \log(4/\delta)}{8}}\right). \tag{13}
\]
Theorem 3 states that though we learn the prediction rule ψ̂_n by solving the maximum conditional entropy problem for the empirical case, we can bound the excess Γ(P̃)-based worst-case risk. This result justifies the specific constraint of fixing the marginal P_X across the proposed Γ(Q) and explains the role of the uncertainty parameter ε in bounding the excess worst-case risk.
3.2 A Minimax Interpretation of Generalized Linear Models
We make the key observation that if F_θ is the log-partition function of an exponential-family distribution, the problem in the RHS of (11), when ε_i = 0 for all i's, is equivalent to minimizing the negative log-likelihood for fitting a generalized linear model [15] given by

• an exponential-family distribution p(y|η) = h(y) exp(η^T θ(y) − F_θ(η)) with the log-partition function F_θ and the sufficient statistic θ(Y),
• a linear predictor, η(X) = AX,
• a mean function, E[θ(Y)|X = x] = ∇F_θ(η(x)).
Therefore, Theorem 2 reveals a duality between the maximum conditional entropy problem over Γ(Q) and the regularized maximum likelihood problem for the specified generalized linear model. As a geometric interpretation of this duality, by solving the regularized maximum likelihood problem in the RHS of (11), we in fact minimize a regularized KL-divergence
\[
\operatorname*{argmin}_{P_{Y|X} \in S_F} \; \mathbb{E}_{Q_X}\!\left[ D_{\mathrm{KL}}\!\left( Q_{Y|X} \,\|\, P_{Y|X} \right) \right] + \sum_{i=1}^{t} \epsilon_i \left\|A_i(P_{Y|X})\right\|_*, \tag{14}
\]
where S_F = { P_{Y|X}(y|x) = h(y) exp(θ(y)^T Ax − F_θ(Ax)) | A ∈ R^{t×d} } is the set of all exponential-family conditional distributions for the specified generalized linear model. This can be viewed as projecting Q onto (Q_X, S_F) (see Figure 3).
Furthermore, for a label-invariant entropy H(Y), the Bayes act for the uniform distribution U_Y leads to the same expected loss under any distribution on Y. Based on the definition of the divergence D in (6), maximizing H(Y|X) over Γ(Q) in the LHS of (11) is therefore equivalent to the following divergence minimization problem
\[
\operatorname*{argmin}_{P_{Y|X} :\, (Q_X,\, P_{Y|X}) \in \Gamma(Q)} \; \mathbb{E}_{Q_X}\!\left[ D\!\left( P_{Y|X},\, U_{Y|X} \right) \right]. \tag{15}
\]
Here U_{Y|X} denotes the uniform conditional distribution over Y given any x ∈ 𝒳. This can be interpreted as projecting the joint distribution (Q_X, U_{Y|X}) onto Γ(Q) (see Figure 3). Then, the duality shown in Theorem 2 implies the following corollary.
Corollary 1. Given a label-invariant H, the solution to (14) also minimizes (15), i.e. (14) ⊆ (15).
3.3 Examples

3.3.1 Logarithmic Loss: Logistic Regression
To gain sufficient information for the Bayes decision rule under the logarithmic loss, for Y ∈ 𝒴 = {1, . . . , t+1}, let θ(Y) be the one-hot encoding of Y, i.e. θ_i(Y) = I(Y = i) for 1 ≤ i ≤ t. Here, we exclude i = t+1 as I(Y = t+1) = 1 − Σ_{i=1}^t I(Y = i). Then
\[
F_\theta(\mathbf{z}) = \log\Big(1 + \sum_{j=1}^{t} \exp(z_j)\Big), \qquad \forall\, 1 \le i \le t :\; \big(\nabla F_\theta(\mathbf{z})\big)_i = \frac{\exp(z_i)}{1 + \sum_{j=1}^{t} \exp(z_j)}, \tag{16}
\]
which is the logistic regression model [16]. Also, the RHS of (11) will be the regularized maximum likelihood problem for logistic regression. This particular result is well-studied in the literature and straightforward using the duality shown in [17].
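A small sketch of (16) (our own code; the implicit 0 logit stands for the excluded class t + 1, and the max-shift is only for numerical stability):

```python
import numpy as np

def F_theta(z):
    """Log-partition F(z) = log(1 + sum_j exp(z_j)) for z in R^t."""
    z_ext = np.append(z, 0.0)          # append the implicit logit of class t + 1
    m = z_ext.max()
    return m + np.log(np.exp(z_ext - m).sum())

def grad_F_theta(z):
    """(grad F(z))_i = exp(z_i) / (1 + sum_j exp(z_j)): the logistic link of (12)."""
    z_ext = np.append(z, 0.0)
    p = np.exp(z_ext - z_ext.max())
    p /= p.sum()
    return p[:-1]                      # P(Y = i | x) for i = 1..t; class t+1 is the rest

print(grad_F_theta(np.array([1.0, -0.5])))   # conditional class probabilities
```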
3.3.2 0-1 Loss: Minimax SVM
To get sufficient information for the Bayes decision rule under the 0-1 loss, we again consider the one-hot encoding θ described for the logarithmic loss. We show in the supplementary material that if z̃ = (z, 0) and z̃_(i) denotes the ith largest element of z̃,
\[
F_\theta(\mathbf{z}) = \max_{1 \le k \le t+1} \; \frac{k - 1 + \sum_{j=1}^{k} \tilde{z}_{(j)}}{k}. \tag{17}
\]
predictor ?? given n samples (xi , yi )ni=1 will be
n
1 ? yi ?T xi
1X
min
max 0 ,
, ?yi ?T xi + k?k? .
(18)
?
n i=1
2
The first term is the empirical risk of a linear classifier over the minimax-hinge loss max{0, 1?z
2 , ?z}
as shown in Figure 2. In contrast, the standard SVM is formulated using the hinge loss max{0, 1 ? z}:
n
min
?
1X
max 0 , 1 ? yi ?T xi + k?k? ,
n i=1
(19)
We therefore call this classification approach the minimax SVM. However, unlike the standard SVM,
the minimax SVM is naturally extended to multi-class classification.
Using Theorem 1.A², we prove that for the 0-1 loss the robust Bayes decision rule exists and is randomized in general: given the optimal linear predictor, let z̃ = (A∗x, 0); the rule randomly predicts a label according to the following z̃-based distribution on labels,
\[
\forall\, 1 \le i \le t+1 :\quad p_{\sigma(i)} = \begin{cases} \tilde{z}_{(i)} + \dfrac{1 - \sum_{j=1}^{k_{\max}} \tilde{z}_{(j)}}{k_{\max}} & \text{if } \sigma(i) \le k_{\max}, \\[4pt] 0 & \text{otherwise.} \end{cases} \tag{20}
\]
Here σ is the permutation sorting z̃ in the ascending order, i.e. z̃_{σ(i)} = z̃_{(i)}, and k_max is the largest index k satisfying \(\sum_{i=1}^{k} \big[\tilde{z}_{(i)} - \tilde{z}_{(k)}\big] < 1\). For example, in the binary case discussed, the minimax SVM first solves (18) to find the optimal β∗ and then predicts label y = 1 vs. label y = −1 with probability min{1, max{0, (1 + x^T β∗)/2}}.
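The binary case can be prototyped in a few lines. The sketch below is our own implementation of the training recipe described in Section 4 (subgradient descent on (18) with an ℓ₂ regularizer; the kinks of the minimax-hinge loss sit at z = −1 and z = 1, and the learning rate, epoch count, and function names are our choices):

```python
import numpy as np

def train_mmsvm(X, y, lam=0.1, lr=0.01, epochs=500):
    """Subgradient descent on (1/n) sum max{0, (1 - z_i)/2, -z_i} + lam ||beta||_2^2,
    where z_i = y_i beta^T x_i, as in (18)."""
    n, d = X.shape
    beta = np.zeros(d)
    for _ in range(epochs):
        z = y * (X @ beta)
        # subgradient of the minimax-hinge in z: -1 on z < -1, -1/2 on [-1, 1), 0 after
        g = np.where(z < -1, -1.0, np.where(z < 1, -0.5, 0.0))
        grad = (X * (g * y)[:, None]).mean(axis=0) + 2 * lam * beta
        beta -= lr * grad
    return beta

def prob_plus_one(X, beta):
    """Randomized rule: P(y = +1 | x) = min{1, max{0, (1 + x^T beta)/2}}."""
    return np.clip((1 + X @ beta) / 2, 0.0, 1.0)
```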
²We show that, given the specific structure of Γ(Q), Theorem 1.A holds whether 𝒳 is finite or infinite.
Dataset    | mmSVM | SVM | DCC | MPM | TAN | DRC
adult      | 17    | 22  | 18  | 22  | 17  | 17
credit     | 12    | 16  | 14  | 13  | 17  | 13
kr-vs-kp   | 4     | 3   | 10  | 5   | 7   | 5
promoters  | 5     | 9   | 5   | 6   | 44  | 6
votes      | 3     | 5   | 3   | 4   | 8   | 3
hepatitis  | 17    | 20  | 19  | 18  | 17  | 17

Table 1: Methods Performance (error in %)
3.3.3 Quadratic Loss: Linear Regression
Based on the Bayes decision rule for the quadratic loss, we choose θ(Y) = Y. To derive F_θ, note that if we let 𝒫_Y in (10) include all possible distributions, the maximized entropy (variance for the quadratic loss) and thus the value of F_θ would be infinity. Therefore, given a parameter ρ, we restrict the second moment of distributions in 𝒫_Y = {P_Y : E[Y²] ≤ ρ²} and then apply (10). We show in the supplementary material that an adjusted version of Theorem 2 holds after this change, and
\[
F_\theta(z) - \rho^2 = \begin{cases} z^2/4 & \text{if } |z/2| \le \rho, \\ \rho\,(|z| - \rho) & \text{if } |z/2| > \rho, \end{cases} \tag{21}
\]
which is the Huber function [18]. Given the samples of a supervised learning task, if we choose the parameter ρ large enough, by solving the RHS of (11) with F_θ(z) replaced by z²/4 and ρ set greater than max_i |A∗x_i|, we can equivalently take F_θ(z) = z²/4 + ρ². Then, by (12) we derive the linear regression model, and the RHS of (11) is equivalent to (a worked sketch of the first case follows this list):

• Least squares when ε = 0.
• Lasso [19, 20] when ‖·‖/‖·‖∗ is the ℓ∞/ℓ₁ pair.
• Ridge regression [21] when ‖·‖ is the ℓ₂-norm.
• (Overlapping) group lasso [22, 23] with the ℓ_{1,p} penalty when Γ_GL(Q) is defined, given subsets I₁, . . . , I_k of {1, . . . , d} and 1/p + 1/q = 1, as
\[
\Gamma_{\mathrm{GL}}(Q) = \left\{ P_{X,Y} :\; P_X = Q_X, \;\; \forall\, 1 \le j \le k :\; \left\| \mathbb{E}_P\!\left[ Y X_{I_j} \right] - \mathbb{E}_Q\!\left[ Y X_{I_j} \right] \right\|_q \le \epsilon_j \right\}. \tag{22}
\]
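As a worked check of the first bullet (our own sketch on synthetic data, with ε = 0): with F_θ(z) = z²/4, the empirical objective mean((a^T x)²/4 − y · a^T x) has the closed-form minimizer a = 2(X^T X)⁻¹X^T y, and the resulting predictor ∇F_θ(a^T x) = a^T x / 2 coincides with the least-squares fit.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 500, 5
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

# Minimize mean((X a)^2 / 4 - y * (X a)); setting the gradient to zero gives
# a = 2 (X^T X)^{-1} X^T y, so the predictor a^T x / 2 is least squares.
a = 2 * np.linalg.solve(X.T @ X, X.T @ y)
beta_ls = np.linalg.solve(X.T @ X, X.T @ y)      # ordinary least squares
print(np.allclose(a / 2, beta_ls))               # True
```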
4 Numerical Experiments
We evaluated the performance of the minimax SVM on six binary classification datasets from the
UCI repository, compared to these five benchmarks: Support Vector Machines (SVM) [24], Discrete
Chebyshev Classifiers (DCC) [3], Minimax Probabilistic Machine (MPM) [2], Tree Augmented
Naive Bayes (TAN) [25], and Discrete Rényi Classifiers (DRC) [4]. The results are summarized in
Table 1 where the numbers indicate the percentage of error in the classification task.
We implemented the minimax SVM by applying subgradient descent to (18) with the regularizer λ‖β‖₂². We determined the parameters by cross validation, where we used a randomly-selected 70% of the training set for training and the rest 30% for testing. We tested the values in {2⁻¹⁰, . . . , 2¹⁰}.
Using the tuned parameters, we trained the algorithm over all the training set and then evaluated the
error rate over the test set. We performed this procedure in 1000 Monte Carlo runs, each training on 70% of the data points and testing on the remaining 30%, and averaged the results.
As seen in the table, the minimax SVM results in the best performance for five of the six datasets.
To compare these methods in high-dimensional problems, we ran an experiment over synthetic
data with n = 200 samples and d = 10000 features. We generated features by i.i.d. Bernoulli
with P(X_i = 1) = 0.7, and considered y = sign(β^T x + z) where z ∼ N(0, 1). Using the above procedure, we evaluated a 19.3% error rate for the mmSVM, 19.5% for SVM, and 19.6% for DRC, which indicates that the mmSVM can outperform SVM and DRC in high-dimensional settings as well. Also, the average training time for the mmSVM was 0.085 seconds, faster than the training time for the SVM (using Matlab's SVM command) with an average of 0.105 seconds.
Acknowledgments: We are grateful to Stanford University for providing a Stanford Graduate Fellowship, and to the Center for Science of Information (CSoI), an NSF Science and Technology Center under grant agreement CCF-0939370, for the support during this research.
References
[1] Vladimir Vapnik. The Nature of Statistical Learning Theory. Springer Science & Business Media, 2013.
[2] Gert R. G. Lanckriet, Laurent El Ghaoui, Chiranjib Bhattacharyya, and Michael I. Jordan. A robust minimax approach to classification. The Journal of Machine Learning Research, 3:555-582, 2003.
[3] Elad Eban, Elad Mezuman, and Amir Globerson. Discrete chebyshev classifiers. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 1233-1241, 2014.
[4] Meisam Razaviyayn, Farzan Farnia, and David Tse. Discrete Rényi classifiers. In Advances in Neural Information Processing Systems 28, pages 3258-3266, 2015.
[5] Peter D. Grünwald and Philip Dawid. Game theory, maximum entropy, minimum discrepancy and robust Bayesian decision theory. The Annals of Statistics, 32(4):1367-1433, 2004.
[6] Edwin T. Jaynes. Information theory and statistical mechanics. Physical Review, 106(4):620, 1957.
[7] Amir Globerson and Naftali Tishby. The minimum information principle for discriminative learning. In Proceedings of the 20th Conference on Uncertainty in Artificial Intelligence, pages 193-200, 2004.
[8] Vitaly Feldman, Venkatesan Guruswami, Prasad Raghavendra, and Yi Wu. Agnostic learning of monomials by halfspaces is hard. SIAM Journal on Computing, 41(6):1558-1590, 2012.
[9] Philip Dawid. Coherent measures of discrepancy, uncertainty and dependence, with applications to Bayesian predictive experimental design. Technical Report 139, University College London, 1998. http://www.ucl.ac.uk/Stats/research/abs94.html.
[10] Thomas M. Cover and Joy A. Thomas. Elements of Information Theory. John Wiley & Sons, 2012.
[11] Yasemin Altun and Alexander Smola. Unifying divergence minimisation and statistical inference via convex duality. In Learning Theory: Conference on Learning Theory COLT 2006, Proceedings, 2006.
[12] Miroslav Dudík, Steven J. Phillips, and Robert E. Schapire. Maximum entropy density estimation with generalized regularization and an application to species distribution modeling. Journal of Machine Learning Research, 8(6):1217-1260, 2007.
[13] Ayse Erkan and Yasemin Altun. Semi-supervised learning via generalized maximum entropy. In AISTATS, pages 209-216, 2010.
[14] Peter L. Bartlett and Shahar Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 3(Nov):463-482, 2002.
[15] Peter McCullagh and John A. Nelder. Generalized Linear Models, volume 37. CRC Press, 1989.
[16] Jerome Friedman, Trevor Hastie, and Robert Tibshirani. The Elements of Statistical Learning, volume 1. Springer, 2001.
[17] Adam L. Berger, Vincent J. Della Pietra, and Stephen A. Della Pietra. A maximum entropy approach to natural language processing. Computational Linguistics, 22(1):39-71, 1996.
[18] Peter J. Huber. Robust Statistics. Wiley, 1981.
[19] Robert Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B (Methodological), pages 267-288, 1996.
[20] Scott Shaobing Chen, David L. Donoho, and Michael A. Saunders. Atomic decomposition by basis pursuit. SIAM Journal on Scientific Computing, 20(1):33-61, 1998.
[21] Arthur E. Hoerl and Robert W. Kennard. Ridge regression: Biased estimation for nonorthogonal problems. Technometrics, 12(1):55-67, 1970.
[22] Ming Yuan and Yi Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 68(1):49-67, 2006.
[23] Laurent Jacob, Guillaume Obozinski, and Jean-Philippe Vert. Group lasso with overlap and graph lasso. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 433-440, 2009.
[24] Corinna Cortes and Vladimir Vapnik. Support-vector networks. Machine Learning, 20(3):273-297, 1995.
[25] C. K. Chow and C. N. Liu. Approximating discrete probability distributions with dependence trees. IEEE Transactions on Information Theory, 14(3):462-467, 1968.